---
applyTo: '**/*.alloy'
---
# Grafana Alloy Configuration Guidelines
## Component Naming Conventions
### Discovery Components
- HTTP discovery: `{service}_exporter_dynamic`
- Relabel discovery: `{service}_exporter_with_cluster`
### Scrape Jobs
- Component name: `metrics_integrations_integrations_{service}`
- Job name: `integrations/{service}`
### Remote Write
- Component name: `metrics_service`
- Always forward to Grafana Cloud endpoint
## Configuration Pipeline Pattern
Every monitoring target must follow this 3-stage pipeline:
```alloy
// 1. Discovery Stage
discovery.http "{service}_exporter_dynamic" {
	url = "http://{service}-{env}.{domain}:8765/sd/prometheus/sd-config?service={service}-exporter"
}

// 2. Labeling Stage
discovery.relabel "{service}_exporter_with_cluster" {
	targets = discovery.http.{service}_exporter_dynamic.targets

	rule {
		target_label = "{service}_cluster" // or appropriate cluster label
		replacement  = "{environment_id}"  // hardcoded environment identifier
	}
}

// 3. Scraping Stage
prometheus.scrape "metrics_integrations_integrations_{service}" {
	targets    = discovery.relabel.{service}_exporter_with_cluster.output
	forward_to = [prometheus.remote_write.metrics_service.receiver]
	job_name   = "integrations/{service}"
}
```
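As a concrete illustration, the pattern might be filled in as follows for a hypothetical `node` service in a `prod` environment; the hostname, domain, and cluster value below are placeholders, not real endpoints:

```alloy
// Discover node-exporter targets from the HTTP SD endpoint (hypothetical host/domain)
discovery.http "node_exporter_dynamic" {
	url = "http://node-prod.example.internal:8765/sd/prometheus/sd-config?service=node-exporter"
}

// Tag every discovered target with its cluster identifier
discovery.relabel "node_exporter_with_cluster" {
	targets = discovery.http.node_exporter_dynamic.targets

	rule {
		target_label = "node_cluster"
		replacement  = "prod"
	}
}

// Scrape the labeled targets and forward them to the shared remote write component
prometheus.scrape "metrics_integrations_integrations_node" {
	targets    = discovery.relabel.node_exporter_with_cluster.output
	forward_to = [prometheus.remote_write.metrics_service.receiver]
	job_name   = "integrations/node"
}
```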
## Environment Configuration
### Service Discovery URLs
- Pattern: `http://{service}-{env}.{domain}:8765/sd/prometheus/sd-config?service={service}-exporter`
- Port 8765 is the standard port for HTTP SD endpoints
- Always use the query parameter `service={service}-exporter`
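For reference, the endpoint is expected to return a standard Prometheus HTTP SD payload: a JSON array of target groups, each with a `targets` list and optional string `labels`. The hosts and label below are illustrative only:

```json
[
  {
    "targets": ["10.0.0.12:9100", "10.0.0.13:9100"],
    "labels": {
      "__meta_service": "node-exporter"
    }
  }
]
```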
### Authentication & Secrets
- Use `sys.env("VARIABLE_NAME")` for sensitive data
- Standard variables:
  - `GCLOUD_RW_API_KEY` for the Grafana Cloud API key
- Never hardcode passwords or API keys
### Cluster Labeling
- Always add cluster/environment labels via `discovery.relabel`
- Use descriptive cluster names (e.g., "gr7", "prod", "staging")
- Cluster labels help with multi-environment visibility
## Remote Write Configuration
Standard remote write configuration:
```alloy
prometheus.remote_write "metrics_service" {
	endpoint {
		url = "https://prometheus-prod-24-prod-eu-west-2.grafana.net/api/prom/push"

		basic_auth {
			username = "1257735" // Grafana instance ID
			password = sys.env("GCLOUD_RW_API_KEY")
		}
	}
}
```
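Note that the username here is the Grafana Cloud instance ID, an identifier rather than a secret, so it may remain in the file; the API key, by contrast, must be present as `GCLOUD_RW_API_KEY` in the environment of the Alloy process (for example via a systemd unit or container environment) before startup, or `sys.env` will resolve to an empty value.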
## Code Style Guidelines
### Comments
- Add descriptive comments above each component
- Explain the purpose, not the syntax
- Use format: `// {Action description}`
### Formatting
- Use tabs for indentation
- One empty line between components
- Align parameters vertically when reasonable
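
Running `alloy fmt` over a configuration file should apply this canonical formatting automatically and is worth doing before committing changes.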
### Component References
- Always reference by full component path: `discovery.http.component_name.targets`
- Use descriptive variable names in targets/forward_to chains
## Error Prevention
### Common Mistakes to Avoid
- Don't hardcode service discovery URLs without environment variables
- Don't skip the relabel stage; always add cluster labels
- Don't use generic job names; follow the `integrations/{service}` pattern
- Don't forget to forward metrics to the remote write endpoint
### Validation Checklist
- [ ] Service discovery URL uses correct pattern
- [ ] Relabel adds appropriate cluster/environment labels
- [ ] Scrape job follows naming convention
- [ ] Metrics are forwarded to remote write
- [ ] No hardcoded secrets
- [ ] Comments explain component purpose