Add Grafana Alloy configuration files and update examples
- Introduced detailed configuration guidelines in alloy.instructions.md
- Added general instructions for project structure in general.instructions.md
- Created config.yaml for NAMU-PC with target hostnames
- Implemented example.alloy and openwrt.alloy for service discovery and scraping
- Added alloy_seed.json for initial configuration state
- Developed demo.alloy for comprehensive monitoring setup
- Established std.alloy for repository path formatting and host configuration loading
- Updated test.alloy to utilize new host configuration loading

.github/instructions/alloy.instructions.md (new file, vendored)
@@ -0,0 +1,114 @@
---
applyTo: '**/*.alloy'
---

# Grafana Alloy Configuration Guidelines

## Component Naming Conventions

### Discovery Components
- HTTP discovery: `{service}_exporter_dynamic`
- Relabel discovery: `{service}_exporter_with_cluster`

### Scrape Jobs
- Component name: `metrics_integrations_integrations_{service}`
- Job name: `integrations/{service}`

### Remote Write
- Component name: `metrics_service`
- Always forward to Grafana Cloud endpoint

## Configuration Pipeline Pattern

Every monitoring target must follow this 3-stage pipeline:

```alloy
// 1. Discovery Stage
discovery.http "{service}_exporter_dynamic" {
	url = "http://{service}-{env}.{domain}:8765/sd/prometheus/sd-config?service={service}-exporter"
}

// 2. Labeling Stage
discovery.relabel "{service}_exporter_with_cluster" {
	targets = discovery.http.{service}_exporter_dynamic.targets

	rule {
		target_label = "{service}_cluster" // or appropriate cluster label
		replacement = "{environment_id}" // hardcoded environment identifier
	}
}

// 3. Scraping Stage
prometheus.scrape "metrics_integrations_integrations_{service}" {
	targets = discovery.relabel.{service}_exporter_with_cluster.output
	forward_to = [prometheus.remote_write.metrics_service.receiver]
	job_name = "integrations/{service}"
}
```

## Environment Configuration

### Service Discovery URLs
- Pattern: `http://{service}-{env}.{domain}:8765/sd/prometheus/sd-config?service={service}-exporter`
- Port 8765 is standard for HTTP SD endpoints
- Always use query parameter `service={service}-exporter`
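
As a concrete reference, the Ceph integration in `demo.alloy` in this repository resolves this pattern to:

```alloy
// Concrete instance from demo.alloy: Ceph exporter discovery for the gr7 environment.
discovery.http "ceph_exporter_dynamic" {
	url = "http://ceph-a.gr7.kkarolis.lt:8765/sd/prometheus/sd-config?service=ceph-exporter"
}
```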

### Authentication & Secrets
- Use `sys.env("VARIABLE_NAME")` for sensitive data
- Standard variables:
  - `GCLOUD_RW_API_KEY` for Grafana Cloud API key
- Never hardcode passwords or API keys

### Cluster Labeling
- Always add cluster/environment labels via `discovery.relabel`
- Use descriptive cluster names (e.g., "gr7", "prod", "staging")
- Cluster labels help with multi-environment visibility

## Remote Write Configuration

Standard remote write configuration:

```alloy
prometheus.remote_write "metrics_service" {
	endpoint {
		url = "https://prometheus-prod-24-prod-eu-west-2.grafana.net/api/prom/push"

		basic_auth {
			username = "1257735" // Grafana instance ID
			password = sys.env("GCLOUD_RW_API_KEY")
		}
	}
}
```

## Code Style Guidelines

### Comments
- Add descriptive comments above each component
- Explain the purpose, not the syntax
- Use format: `// {Action description}`

### Formatting
- Use tabs for indentation
- One empty line between components
- Align parameters vertically when reasonable

### Component References
- Always reference by full component path: `discovery.http.component_name.targets`
- Use descriptive variable names in targets/forward_to chains

## Error Prevention

### Common Mistakes to Avoid
- Don't hardcode service discovery URLs without environment variables
- Don't skip the relabel stage - always add cluster labels
- Don't use generic job names - follow `integrations/{service}` pattern
- Don't forget to forward metrics to remote write endpoint

### Validation Checklist
- [ ] Service discovery URL uses correct pattern
- [ ] Relabel adds appropriate cluster/environment labels
- [ ] Scrape job follows naming convention
- [ ] Metrics are forwarded to remote write
- [ ] No hardcoded secrets
- [ ] Comments explain component purpose

.github/instructions/general.instructions.md (new file, vendored)
@@ -0,0 +1,126 @@
---
applyTo: '**'
---

# Grafana Alloy Configuration Project

## Project Overview

This project contains Grafana Alloy configurations for monitoring infrastructure across multiple environments. The architecture supports dynamic service discovery with environment-specific labeling and centralized metrics collection.

## Directory Structure

### Environment Organization
```
/
├── README.md
├── .github/
│   └── instructions/
├── {Environment}/
│   └── {environment}.alloy
```

### Environment Naming
- Use descriptive directory names (e.g., `OpenWRT/`, `Production/`, `Staging/`)
- One `.alloy` file per environment/deployment target
- File names should match environment purpose (e.g., `openwrt.alloy`, `production.alloy`)
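
As an illustration, `OpenWRT/example.alloy` keeps the environment entry point small and pulls shared logic in as a module (condensed excerpt; the remote write component is defined later in the same file):

```alloy
// Import the reusable OpenWRT module that lives next to the entry point.
import.file "openwrt" {
	filename = "./openwrt.alloy"
}

// Instantiate the module and forward whatever it produces to the shared pipeline.
openwrt.openwrt "default" {
	forward_to = [prometheus.remote_write.metrics_service.receiver]
}
```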

## Monitoring Domains

### Current Implementations
- **Storage Monitoring**: Ceph cluster monitoring with dynamic discovery
- **Network Infrastructure**: OpenWRT-based network monitoring

### Adding New Domains
When expanding to new monitoring domains:
1. Create environment-specific directory if needed
2. Follow the established discovery → relabel → scrape pipeline
3. Ensure proper integration with existing remote write configuration
4. Add appropriate documentation

## Environment Configuration Strategy

### Multi-Environment Support
- Each environment has isolated configuration files
- Environment-specific service discovery endpoints
- Consistent labeling strategy across environments
- Centralized metrics collection in Grafana Cloud

### Service Discovery Integration
- HTTP-based service discovery for dynamic target discovery
- Standardized SD endpoint patterns across environments
- Port 8765 as standard for HTTP SD services
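
A minimal sketch of a component consuming such an endpoint is shown below; the hostname is a placeholder, and the commented JSON is the standard Prometheus HTTP SD response shape the endpoint is expected to return:

```alloy
// The endpoint must answer with Prometheus HTTP SD JSON, e.g.
// [{"targets": ["10.0.0.5:9283"], "labels": {"service": "example-exporter"}}]
discovery.http "example_exporter_dynamic" {
	url = "http://example-prod.example.lan:8765/sd/prometheus/sd-config?service=example-exporter"

	// How often to re-query the endpoint; Alloy defaults to 60s.
	refresh_interval = "60s"
}
```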

## Development Workflow

### Making Changes
1. **Identify Target Environment**: Determine which environment(s) need updates
2. **Follow Patterns**: Use existing configurations as templates
3. **Test Locally**: Validate Alloy syntax before deployment
4. **Document Changes**: Update README or comments as needed

### Adding New Services
1. **Service Discovery Setup**: Ensure HTTP SD endpoint exists
2. **Configuration Creation**: Follow the 3-stage pipeline pattern (see the sketch after this list)
3. **Environment Labeling**: Add appropriate cluster/environment labels
4. **Integration Testing**: Verify metrics flow to Grafana Cloud
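
To make steps 2 and 3 concrete, here is a minimal sketch for a hypothetical `widget` service; the component names, URL, and `widget_cluster` value are placeholders, and the `metrics_service` remote write component is assumed to already exist:

```alloy
// 1. Discover widget-exporter targets from the HTTP SD endpoint.
discovery.http "widget_exporter_dynamic" {
	url = "http://widget-prod.example.lan:8765/sd/prometheus/sd-config?service=widget-exporter"
}

// 2. Stamp every target with its cluster/environment label.
discovery.relabel "widget_exporter_with_cluster" {
	targets = discovery.http.widget_exporter_dynamic.targets

	rule {
		target_label = "widget_cluster"
		replacement  = "prod"
	}
}

// 3. Scrape the targets and forward the samples to the shared remote write.
prometheus.scrape "metrics_integrations_integrations_widget" {
	targets    = discovery.relabel.widget_exporter_with_cluster.output
	forward_to = [prometheus.remote_write.metrics_service.receiver]
	job_name   = "integrations/widget"
}
```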

### Code Review Guidelines
- Verify naming conventions are followed
- Check for hardcoded secrets (should use environment variables)
- Ensure proper service discovery patterns
- Validate remote write configuration

## Security Considerations

### Secrets Management
- Never commit API keys or passwords to repository
- Use environment variables for all sensitive data
- Follow principle of least privilege for API access

### Network Security
- Service discovery endpoints should be on trusted networks
- Consider firewall rules for Alloy agents
- Use HTTPS where possible for external endpoints

## Integration Points

### Grafana Cloud
- **Metrics Storage**: Prometheus-compatible remote write
- **Authentication**: Instance ID + API key
- **Endpoint**: Fixed Grafana Cloud Prometheus URL

### Service Discovery
- **Protocol**: HTTP-based service discovery
- **Format**: Prometheus SD compatible JSON
- **Endpoints**: Environment-specific discovery services

### Monitoring Targets
- **Exporters**: Various Prometheus exporters (Ceph, Node, etc.)
- **Discovery**: Dynamic target discovery via HTTP SD
- **Labeling**: Environment and cluster-specific labels

## Documentation Standards

### File Documentation
- Each `.alloy` file should have header comments explaining purpose
- Complex configurations need inline comments
- Environment-specific notes in README sections

### Change Documentation
- Update README when adding new environments
- Document new service integrations
- Note any breaking changes or migration requirements

## Troubleshooting

### Common Issues
- **Service Discovery Failures**: Check HTTP SD endpoint availability
- **Authentication Errors**: Verify environment variables are set
- **Missing Metrics**: Confirm scrape job configuration and forwarding

### Debug Strategies
- Use Alloy's built-in debugging and logging
- Verify service discovery target resolution
- Check Grafana Cloud metrics ingestion
- Validate network connectivity to all endpoints
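
One way to turn on that built-in debugging, mirroring `test.alloy` in this repository (the `logging` block and the default UI address are standard Alloy settings, not something this commit configures):

```alloy
// Stream per-component data in the Alloy UI (http://localhost:12345 by default),
// which makes it easy to inspect what discovery and relabel components emit.
livedebugging {
	enabled = true
}

// Temporarily raise log verbosity while troubleshooting, then return it to "info".
logging {
	level  = "debug"
	format = "logfmt"
}
```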

OpenWRT/config.yaml (new file)
@@ -0,0 +1,5 @@
NAMU-PC:
  targets:
    - hostname: r1.kv40.kkarolis.lt
    - hostname: r2.kv40.kkarolis.lt

OpenWRT/example.alloy (new file)
@@ -0,0 +1,19 @@
import.file "openwrt" {
	filename = "./openwrt.alloy"
}

openwrt.openwrt "default" {
	forward_to = [prometheus.remote_write.metrics_service.receiver]
}

prometheus.remote_write "metrics_service" {
	endpoint {
		url = "https://prometheus-prod-24-prod-eu-west-2.grafana.net/api/prom/push"

		basic_auth {
			username = "1257735"
			password = sys.env("GCLOUD_RW_API_KEY")
		}
	}
}

OpenWRT/openwrt.alloy (new file)
@@ -0,0 +1,7 @@
declare "openwrt" {
	argument "forward_to" {
		optional = false
		comment = "Where to forward the scraped metrics"
	}

}

data-alloy/alloy_seed.json (new file)
@@ -0,0 +1 @@
{"UID":"960d41f5-980d-4a15-9eed-fa371d06f79d","created_at":"2025-08-01T16:05:56.172604+03:00","version":"v1.10.0"}

demo.alloy (new file)
@@ -0,0 +1,272 @@
declare "ceph_linux" {
	// Fetch targets dynamically from the HTTP SD endpoint
	discovery.http "ceph_exporter_dynamic" {
		url = "http://ceph-a.gr7.kkarolis.lt:8765/sd/prometheus/sd-config?service=ceph-exporter"
	}

	// Add ceph_cluster label to all discovered targets
	discovery.relabel "ceph_exporter_with_cluster" {
		targets = discovery.http.ceph_exporter_dynamic.targets

		rule {
			target_label = "ceph_cluster"
			replacement = "gr7"
		}
	}

	// Scrape all ceph-exporters (with ceph_cluster label)
	prometheus.scrape "metrics_integrations_integrations_ceph" {
		targets = discovery.relabel.ceph_exporter_with_cluster.output
		forward_to = [prometheus.remote_write.metrics_service.receiver]
		job_name = "integrations/ceph"
	}

	prometheus.remote_write "metrics_service" {
		endpoint {
			url = "https://prometheus-prod-24-prod-eu-west-2.grafana.net/api/prom/push"

			basic_auth {
				username = "1257735"
				password = sys.env("GCLOUD_RW_API_KEY")
			}
		}
	}
}

ceph_linux "default" { }

declare "linux_node_linux" {
	discovery.relabel "integrations_node_exporter" {
		targets = prometheus.exporter.unix.integrations_node_exporter.targets

		rule {
			target_label = "instance"
			replacement = constants.hostname
		}

		rule {
			target_label = "job"
			replacement = "integrations/node_exporter"
		}
	}

	prometheus.exporter.unix "integrations_node_exporter" {
		disable_collectors = ["ipvs", "btrfs", "infiniband", "xfs", "zfs"]

		filesystem {
			fs_types_exclude = "^(autofs|binfmt_misc|bpf|cgroup2?|configfs|debugfs|devpts|devtmpfs|tmpfs|fusectl|hugetlbfs|iso9660|mqueue|nsfs|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|selinuxfs|squashfs|sysfs|tracefs)$"
			mount_points_exclude = "^/(dev|proc|run/credentials/.+|sys|var/lib/docker/.+)($|/)"
			mount_timeout = "5s"
		}

		netclass {
			ignored_devices = "^(veth.*|cali.*|[a-f0-9]{15})$"
		}

		netdev {
			device_exclude = "^(veth.*|cali.*|[a-f0-9]{15})$"
		}
	}

	prometheus.scrape "integrations_node_exporter" {
		targets = discovery.relabel.integrations_node_exporter.output
		forward_to = [prometheus.relabel.integrations_node_exporter.receiver]
	}

	prometheus.relabel "integrations_node_exporter" {
		forward_to = [prometheus.remote_write.metrics_service.receiver]

		rule {
			source_labels = ["__name__"]
regex = "up|node_arp_entries|node_boot_time_seconds|node_context_switches_total|node_cpu_seconds_total|node_disk_io_time_seconds_total|node_disk_io_time_weighted_seconds_total|node_disk_read_bytes_total|node_disk_read_time_seconds_total|node_disk_reads_completed_total|node_disk_write_time_seconds_total|node_disk_writes_completed_total|node_disk_written_bytes_total|node_filefd_allocated|node_filefd_maximum|node_filesystem_avail_bytes|node_filesystem_device_error|node_filesystem_files|node_filesystem_files_free|node_filesystem_readonly|node_filesystem_size_bytes|node_intr_total|node_load1|node_load15|node_load5|node_md_disks|node_md_disks_required|node_memory_Active_anon_bytes|node_memory_Active_bytes|node_memory_Active_file_bytes|node_memory_AnonHugePages_bytes|node_memory_AnonPages_bytes|node_memory_Bounce_bytes|node_memory_Buffers_bytes|node_memory_Cached_bytes|node_memory_CommitLimit_bytes|node_memory_Committed_AS_bytes|node_memory_DirectMap1G_bytes|node_memory_DirectMap2M_bytes|node_memory_DirectMap4k_bytes|node_memory_Dirty_bytes|node_memory_HugePages_Free|node_memory_HugePages_Rsvd|node_memory_HugePages_Surp|node_memory_HugePages_Total|node_memory_Hugepagesize_bytes|node_memory_Inactive_anon_bytes|node_memory_Inactive_bytes|node_memory_Inactive_file_bytes|node_memory_Mapped_bytes|node_memory_MemAvailable_bytes|node_memory_MemFree_bytes|node_memory_MemTotal_bytes|node_memory_SReclaimable_bytes|node_memory_SUnreclaim_bytes|node_memory_ShmemHugePages_bytes|node_memory_ShmemPmdMapped_bytes|node_memory_Shmem_bytes|node_memory_Slab_bytes|node_memory_SwapTotal_bytes|node_memory_VmallocChunk_bytes|node_memory_VmallocTotal_bytes|node_memory_VmallocUsed_bytes|node_memory_WritebackTmp_bytes|node_memory_Writeback_bytes|node_netstat_Icmp6_InErrors|node_netstat_Icmp6_InMsgs|node_netstat_Icmp6_OutMsgs|node_netstat_Icmp_InErrors|node_netstat_Icmp_InMsgs|node_netstat_Icmp_OutMsgs|node_netstat_IpExt_InOctets|node_netstat_IpExt_OutOctets|node_netstat_TcpExt_ListenDrops|node_netstat_TcpExt_ListenOverflows|node_netstat_TcpExt_TCPSynRetrans|node_netstat_Tcp_InErrs|node_netstat_Tcp_InSegs|node_netstat_Tcp_OutRsts|node_netstat_Tcp_OutSegs|node_netstat_Tcp_RetransSegs|node_netstat_Udp6_InDatagrams|node_netstat_Udp6_InErrors|node_netstat_Udp6_NoPorts|node_netstat_Udp6_OutDatagrams|node_netstat_Udp6_RcvbufErrors|node_netstat_Udp6_SndbufErrors|node_netstat_UdpLite_InErrors|node_netstat_Udp_InDatagrams|node_netstat_Udp_InErrors|node_netstat_Udp_NoPorts|node_netstat_Udp_OutDatagrams|node_netstat_Udp_RcvbufErrors|node_netstat_Udp_SndbufErrors|node_network_carrier|node_network_info|node_network_mtu_bytes|node_network_receive_bytes_total|node_network_receive_compressed_total|node_network_receive_drop_total|node_network_receive_errs_total|node_network_receive_fifo_total|node_network_receive_multicast_total|node_network_receive_packets_total|node_network_speed_bytes|node_network_transmit_bytes_total|node_network_transmit_compressed_total|node_network_transmit_drop_total|node_network_transmit_errs_total|node_network_transmit_fifo_total|node_network_transmit_multicast_total|node_network_transmit_packets_total|node_network_transmit_queue_length|node_network_up|node_nf_conntrack_entries|node_nf_conntrack_entries_limit|node_os_info|node_sockstat_FRAG6_inuse|node_sockstat_FRAG_inuse|node_sockstat_RAW6_inuse|node_sockstat_RAW_inuse|node_sockstat_TCP6_inuse|node_sockstat_TCP_alloc|node_sockstat_TCP_inuse|node_sockstat_TCP_mem|node_sockstat_TCP_mem_bytes|node_sockstat_TCP_orphan|node_sockstat_TCP_tw|node_sockstat_UDP6_inuse|node
_sockstat_UDPLITE6_inuse|node_sockstat_UDPLITE_inuse|node_sockstat_UDP_inuse|node_sockstat_UDP_mem|node_sockstat_UDP_mem_bytes|node_sockstat_sockets_used|node_softnet_dropped_total|node_softnet_processed_total|node_softnet_times_squeezed_total|node_systemd_unit_state|node_textfile_scrape_error|node_time_zone_offset_seconds|node_timex_estimated_error_seconds|node_timex_maxerror_seconds|node_timex_offset_seconds|node_timex_sync_status|node_uname_info|node_vmstat_oom_kill|node_vmstat_pgfault|node_vmstat_pgmajfault|node_vmstat_pgpgin|node_vmstat_pgpgout|node_vmstat_pswpin|node_vmstat_pswpout|process_max_fds|process_open_fds"
			action = "keep"
		}
	}

	prometheus.remote_write "metrics_service" {
		endpoint {
			url = "https://prometheus-prod-24-prod-eu-west-2.grafana.net/api/prom/push"

			basic_auth {
				username = "1257735"
				password = sys.env("GCLOUD_RW_API_KEY")
			}
		}
	}

	loki.source.journal "logs_integrations_integrations_node_exporter_journal_scrape" {
		max_age = "24h0m0s"
		relabel_rules = discovery.relabel.logs_integrations_integrations_node_exporter_journal_scrape.rules
		forward_to = [loki.write.grafana_cloud_loki.receiver]
	}

	local.file_match "logs_integrations_integrations_node_exporter_direct_scrape" {
		path_targets = [{
			__address__ = "localhost",
			__path__ = "/var/log/{syslog,messages,*.log}",
			instance = constants.hostname,
			job = "integrations/node_exporter",
		}]
	}

	discovery.relabel "logs_integrations_integrations_node_exporter_journal_scrape" {
		targets = []

		rule {
			source_labels = ["__journal__systemd_unit"]
			target_label = "unit"
		}

		rule {
			source_labels = ["__journal__boot_id"]
			target_label = "boot_id"
		}

		rule {
			source_labels = ["__journal__transport"]
			target_label = "transport"
		}

		rule {
			source_labels = ["__journal_priority_keyword"]
			target_label = "level"
		}
	}

	loki.source.file "logs_integrations_integrations_node_exporter_direct_scrape" {
		targets = local.file_match.logs_integrations_integrations_node_exporter_direct_scrape.targets
		forward_to = [loki.write.grafana_cloud_loki.receiver]
	}

	loki.write "grafana_cloud_loki" {
		endpoint {
			url = "https://logs-prod-012.grafana.net/loki/api/v1/push"

			basic_auth {
				username = "727913"
				password = sys.env("GCLOUD_RW_API_KEY")
			}
		}
	}
}

linux_node_linux "default" { }

declare "self_monitoring_logs_linux" {
	// THIS IS A GENERATED REMOTE CONFIGURATION.
	//
	// * You can edit the contents and matchers for this configuration without them being overwritten.
	// * If you delete ALL generated configurations, the latest default versions will be recreated.
	// * This configuration requires the following environment variables to be set wherever alloy is running:
	//   * GCLOUD_RW_API_KEY: The Grafana Cloud API key with write access to Loki.
	//   * GCLOUD_FM_COLLECTOR_ID: A unique collector ID matching the remotecfg id argument value.

	// Write logs to your Grafana Cloud Loki instance.
	loki.write "grafana_cloud_loki" {
		endpoint {
			url = "https://logs-prod-012.grafana.net/loki/api/v1/push"

			basic_auth {
				username = "727913"
				password = sys.env("GCLOUD_RW_API_KEY")
			}
		}
	}

	// Read Alloy logs when running as a systemd service with the following additional labels:
	// * job: "integrations/alloy" is compatible with Grafana Cloud's Alloy Health Integrations.
	// * collector_id: The unique collector ID matching the remotecfg id argument value.
	//   Used to match collector-specific metrics to power the 'Collector
	//   Health' section of the Fleet Management UI.
	loki.source.journal "alloy_logs_unit" {
		matches = "_SYSTEMD_UNIT=alloy.service"
		forward_to = [loki.write.grafana_cloud_loki.receiver]
		labels = {"job" = "integrations/alloy", "collector_id" = sys.env("GCLOUD_FM_COLLECTOR_ID")}
	}

	// Read Alloy logs from syslog with the following additional labels:
	// * job: "integrations/alloy" is compatible with Grafana Cloud's Alloy Health Integrations.
	// * collector_id: The unique collector ID matching the remotecfg id argument value.
	//   Used to match collector-specific metrics to power the 'Collector
	//   Health' section of the Fleet Management UI.
	loki.source.journal "alloy_logs_tag" {
		matches = "SYSLOG_IDENTIFIER=alloy"
		forward_to = [loki.write.grafana_cloud_loki.receiver]
		labels = {"job" = "integrations/alloy", "collector_id" = sys.env("GCLOUD_FM_COLLECTOR_ID")}
	}
}

self_monitoring_logs_linux "default" { }

declare "self_monitoring_metrics" {
	// THIS IS A GENERATED REMOTE CONFIGURATION.
	//
	// * You can edit the contents and matchers for this configuration without them being overwritten.
	// * If you delete ALL generated configurations, the latest default versions will be recreated.
	// * This configuration requires the following environment variables to be set wherever alloy is running:
	//   * GCLOUD_RW_API_KEY: The Grafana Cloud API key with write access to Loki.
	//   * GCLOUD_FM_COLLECTOR_ID: A unique collector ID matching the remotecfg id argument value.

	// Export Alloy metrics in memory.
	prometheus.exporter.self "integrations_alloy_health" { }

	// Target Alloy metrics with the following additional labels:
	// * job: "integrations/alloy" is compatible with Grafana Cloud's Alloy Health Integrations.
	// * collector_id: The unique collector ID matching the remotecfg id argument value.
	//   Used to match collector-specific metrics to power the 'Collector
	//   Health' section of the Fleet Management UI.
	// * instance: The hostname of the machine running Alloy.
	discovery.relabel "integrations_alloy_health" {
		targets = prometheus.exporter.self.integrations_alloy_health.targets

		rule {
			action = "replace"
			target_label = "collector_id"
			replacement = sys.env("GCLOUD_FM_COLLECTOR_ID")
		}

		rule {
			target_label = "instance"
			replacement = constants.hostname
		}

		rule {
			target_label = "job"
			replacement = "integrations/alloy"
		}
	}

	// Scrape Alloy metrics and forward them to the remote write component.
	prometheus.scrape "integrations_alloy_health" {
		targets = array.concat(
			discovery.relabel.integrations_alloy_health.output,
		)
		forward_to = [prometheus.relabel.integrations_alloy_health.receiver]
		job_name = "integrations/alloy"
	}

	// Select only the metrics that are relevant to the Alloy Health Integrations.
	prometheus.relabel "integrations_alloy_health" {
		forward_to = [prometheus.remote_write.default.receiver]

		rule {
			source_labels = ["__name__"]
regex = "alloy_build_info|alloy_component_controller_running_components|alloy_component_dependencies_wait_seconds|alloy_component_dependencies_wait_seconds_bucket|alloy_component_evaluation_seconds|alloy_component_evaluation_seconds_bucket|alloy_component_evaluation_seconds_count|alloy_component_evaluation_seconds_sum|alloy_component_evaluation_slow_seconds|alloy_config_hash|alloy_resources_machine_rx_bytes_total|alloy_resources_machine_tx_bytes_total|alloy_resources_process_cpu_seconds_total|alloy_resources_process_resident_memory_bytes|cluster_node_gossip_health_score|cluster_node_gossip_proto_version|cluster_node_gossip_received_events_total|cluster_node_info|cluster_node_lamport_time|cluster_node_peers|cluster_node_update_observers|cluster_transport_rx_bytes_total|cluster_transport_rx_packet_queue_length|cluster_transport_rx_packets_failed_total|cluster_transport_rx_packets_total|cluster_transport_stream_rx_bytes_total|cluster_transport_stream_rx_packets_failed_total|cluster_transport_stream_rx_packets_total|cluster_transport_stream_tx_bytes_total|cluster_transport_stream_tx_packets_failed_total|cluster_transport_stream_tx_packets_total|cluster_transport_streams|cluster_transport_tx_bytes_total|cluster_transport_tx_packet_queue_length|cluster_transport_tx_packets_failed_total|cluster_transport_tx_packets_total|go_gc_duration_seconds_count|go_goroutines|go_memstats_heap_inuse_bytes|otelcol_exporter_send_failed_spans_total|otelcol_exporter_sent_spans_total|otelcol_processor_batch_batch_send_size_bucket|otelcol_processor_batch_metadata_cardinality|otelcol_processor_batch_timeout_trigger_send_total|otelcol_receiver_accepted_spans_total|otelcol_receiver_refused_spans_total|prometheus_remote_storage_bytes_total|prometheus_remote_storage_highest_timestamp_in_seconds|prometheus_remote_storage_metadata_bytes_total|prometheus_remote_storage_queue_highest_sent_timestamp_seconds|prometheus_remote_storage_samples_failed_total|prometheus_remote_storage_samples_retried_total|prometheus_remote_storage_samples_total|prometheus_remote_storage_sent_batch_duration_seconds_bucket|prometheus_remote_storage_sent_batch_duration_seconds_count|prometheus_remote_storage_sent_batch_duration_seconds_sum|prometheus_remote_storage_shards|prometheus_remote_storage_shards_max|prometheus_remote_storage_shards_min|prometheus_remote_write_wal_samples_appended_total|prometheus_remote_write_wal_storage_active_series|rpc_server_duration_milliseconds_bucket|scrape_duration_seconds|up"
			action = "keep"
		}
	}

	// Write metrics to your Grafana Cloud Prometheus instance.
	prometheus.remote_write "default" {
		endpoint {
			url = "https://prometheus-prod-24-prod-eu-west-2.grafana.net/api/prom/push"

			basic_auth {
				username = "1257735"
				password = sys.env("GCLOUD_RW_API_KEY")
			}
		}
	}
}

self_monitoring_metrics "default" { }

lib/std.alloy (new file)
@@ -0,0 +1,63 @@
declare "fmt_repository_path" {
	argument "repository" {
		optional = true
		comment = "The repository to use for url"
		default = "https://git.kkarolis.lt/karolis/Alloy/"
	}

	argument "type" {
		optional = true
		comment = "Type of access to the repository, e.g. 'raw' for raw files"
		default = "raw"
	}

	argument "branch" {
		optional = true
		comment = "The branch to use for the repository"
		default = "main"
	}

	argument "path" {
		optional = false
		comment = "The path to the file in the repository"
		default = ""
	}

	export "url" {
		value = string.format("%s%s/branch/%s/%s", argument.repository.value, argument.type.value, argument.branch.value, argument.path.value)
	}
}

declare "load_host_config" {
	argument "host" {
		optional = true
		comment = "The host to load the configuration for"
		default = sys.env("GCLOUD_FM_COLLECTOR_ID")
	}

	argument "config_file" {
		optional = false
		comment = "The path to the configuration file"
	}

	argument "repository" {
		optional = true
		comment = "The repository to use for loading the configuration"
		default = "https://git.kkarolis.lt/karolis/Alloy/"
	}

	fmt_repository_path "config_file_url" {
		repository = argument.repository.value
		type = "raw"
		branch = "main"
		path = argument.config_file.value
	}

	remote.http "config_file" {
		url = fmt_repository_path.config_file_url.url
	}

	export "config" {
		value = encoding.from_yaml(remote.http.config_file.content)[argument.host.value]
	}
}

test.alloy (new file)
@@ -0,0 +1,14 @@
livedebugging {
	enabled = true
}

import.file "libstd" {
	filename = "lib/std.alloy"
}

libstd.load_host_config "load_config" {
	host = "NAMU-PC"
	config_file = "OpenWRT/config.yaml"
}