Friday, 27 February 2026

GCP Console log analysis — step-by-step (using Claude.ai inside IntelliJ)

A practical, end-to-end guide showing how to find, query, and analyze Google Cloud logs in the GCP Console, and how to use an AI assistant (Claude) inside IntelliJ to accelerate queries, regex, and root-cause investigation.

Below: prerequisites → setup → hands-on analysis steps (Logs Explorer + gcloud) → exporting/alerts → using Claude in IntelliJ to speed things up → best practices & troubleshooting.


1) Quick overview & prerequisites

What we’ll use:

  • GCP Cloud Logging (Logs Explorer & Log Analytics) to view and query logs.

  • IntelliJ IDEA with:

    • Cloud Code for IntelliJ (recommended for GCP integration).

    • A Claude/Claude Code plugin (lets you chat with Claude inside your JetBrains IDE to produce queries, summarize log dumps, create regex, etc.). Several community and official plugins exist.

Prerequisites:

  1. GCP project with logs being produced (Compute Engine, Cloud Run, GKE, App Engine, etc.).

  2. gcloud CLI installed and authenticated (gcloud auth login or service account as needed).

  3. IntelliJ IDEA (2023.3+ recommended) with Cloud Code plugin and a Claude plugin installed.


2) Enable and inspect logs in GCP Console (step-by-step)

a) Open Logs Explorer

  1. In the Google Cloud Console, go to Logging → Logs Explorer. (This is the primary UI for searching and troubleshooting logs.)

b) Use the Logs Explorer query builder or enter a Logging query language filter

  • Example — show latest ERRORs from GCE instances:

resource.type="gce_instance"
severity>=ERROR
  • Or combine fields (for Cloud Run service my-service):

resource.type="cloud_run_revision"
resource.labels.service_name="my-service"
severity>=ERROR

Logs Explorer supports both the basic filter builder and Log Analytics/SQL-style queries for deeper analysis.

c) Save and pin useful queries

  • Save frequent searches as Saved Queries so teams can reuse them. (Logs Explorer UI supports saving filters for troubleshooting workflows.)


3) CLI quick-look: gcloud logging read

When you want quick command-line inspection:

# Read the last 50 ERROR entries from Compute Engine
gcloud logging read 'resource.type="gce_instance" severity>=ERROR' --limit=50 --project=my-project

Use --freshness or add timestamps in the filter to focus ranges. (GCP docs show how to form queries and use the CLI for reads and exports.)
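When the same read recurs, it helps to script the filter construction. A minimal Python sketch (the helper name build_error_filter is hypothetical) that builds a time-bounded Logging query language filter you can hand to gcloud logging read:

```python
from datetime import datetime, timedelta, timezone

def build_error_filter(resource_type: str, hours_back: int = 2) -> str:
    """Build a Cloud Logging filter limited to the last N hours.

    Clauses joined by whitespace are implicitly ANDed by the
    Logging query language.
    """
    since = datetime.now(timezone.utc) - timedelta(hours=hours_back)
    return " ".join([
        f'resource.type="{resource_type}"',
        "severity>=ERROR",
        f'timestamp>="{since.strftime("%Y-%m-%dT%H:%M:%SZ")}"',
    ])

print(build_error_filter("gce_instance"))
```

The printed filter can then be passed straight to the CLI, e.g. gcloud logging read "$(python build_filter.py)" --limit=50.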


4) Deeper analysis: Log Analytics & export to BigQuery

  • For aggregated analytics, use Log Analytics (Logs SQL) to run SQL queries over logs, create charts, and build dashboards. This is useful for patterns, percentiles, and grouping.

  • Export option: create a Logs sink to export to BigQuery (for long-term analysis / ML) or to Cloud Storage / Pub/Sub for pipeline processing. Use sinks when you need to run complex analysis or join logs with other datasets.


5) Typical log-analysis workflow (practical steps)

  1. Reproduce / identify time window — narrow the time range to when the incident happened.

  2. Start with high severity — filter severity>=ERROR / severity>=CRITICAL.

  3. Group by resource/service — add resource.labels filters.

  4. Expand to surrounding context — pick a trace id / request id from an error entry and search for it to see full request flow.

  5. Run Log Analytics — aggregate counts per minute to spot spikes (e.g., group counts by TIMESTAMP_TRUNC(timestamp, MINUTE) in Log Analytics SQL).
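Step 5 can also be sketched offline: given entries exported with gcloud logging read --format=json, bucket error counts per minute to spot spikes (the sample entries below are hypothetical and trimmed to the fields used):

```python
from collections import Counter

# Hypothetical sample entries, trimmed to the fields used here.
entries = [
    {"timestamp": "2026-02-20T03:00:12Z", "severity": "ERROR"},
    {"timestamp": "2026-02-20T03:00:47Z", "severity": "ERROR"},
    {"timestamp": "2026-02-20T03:01:05Z", "severity": "INFO"},
    {"timestamp": "2026-02-20T03:01:59Z", "severity": "ERROR"},
]

def errors_per_minute(entries):
    """Count ERROR-and-above entries per minute bucket."""
    high = {"ERROR", "CRITICAL", "ALERT", "EMERGENCY"}
    counts = Counter()
    for e in entries:
        if e.get("severity") in high:
            counts[e["timestamp"][:16]] += 1  # truncate RFC 3339 to the minute
    return counts

print(errors_per_minute(entries))
# Counter({'2026-02-20T03:00': 2, '2026-02-20T03:01': 1})
```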


6) Use IntelliJ + Claude to speed things up (concrete examples)

Why combine IntelliJ + Claude?

  • Claude inside IntelliJ can generate / refine Logs Explorer queries, produce regexes to extract fields, summarize large log excerpts, translate raw logs into human-readable root causes, or create templates for alerts. JetBrains plugins let you chat with Claude inside the IDE so you don't context-switch.

Example workflows inside IntelliJ

A. Paste a sample error log and ask Claude to summarize

  • Copy an error log snippet into Claude chat window (in the plugin panel) and ask:

    • “Summarize the likely root cause and list the fields (request id, user id, error code) with regexes to extract them.”

  • Claude returns a short summary plus regex patterns you can paste into Logs Explorer’s extraction field.
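For illustration, here is the kind of regex extraction Claude might hand back (the sample line and field names are hypothetical):

```python
import re

# Hypothetical error line with embedded key=value fields.
line = ("2026-02-20T03:12:44Z ERROR payment failed "
        "requestId=abcd-1234-xyz userId=u-5521 errorCode=DEADLINE_EXCEEDED")

patterns = {
    "request_id": r"requestId=([\w-]+)",
    "user_id": r"userId=([\w-]+)",
    "error_code": r"errorCode=([A-Z_]+)",
}

fields = {}
for name, pat in patterns.items():
    m = re.search(pat, line)
    fields[name] = m.group(1) if m else None

print(fields)
# {'request_id': 'abcd-1234-xyz', 'user_id': 'u-5521', 'error_code': 'DEADLINE_EXCEEDED'}
```

The capture groups in each pattern are what you would paste into Logs Explorer's field-extraction workflow.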

B. Ask Claude to generate a Logs Explorer filter

  • Prompt example:
    Create a Logs Explorer query to find ERROR logs in Cloud Run service "payments" in the last 2 hours that contain "timeout" or "deadline exceeded".

  • Claude will produce a Logging query language filter you can copy into Logs Explorer.

C. Convert free-text problem statement → query

  • Tell Claude: “I see intermittent 503s between 2026-02-20 03:00 and 04:30 UTC for service X. Give me a set of diagnostic queries to run (3 priorities).”

  • Claude returns prioritized queries: spike detection, trace id extraction, and container restart correlation.

D. Create alerting rule skeletons

  • Ask Claude to draft the alerts (example conditions, thresholds, incident title, runbook link). Use the output to create an alert policy in Cloud Monitoring.

(Plugins like Claude Code / community IntelliClaude let you run these conversations directly in the IDE and keep project context available to the assistant.)


7) Sample Logs Explorer queries & patterns

Find 5xx errors in Cloud Run for a service

resource.type="cloud_run_revision"
resource.labels.service_name="payments"
httpRequest.status>=500
timestamp >= "2026-02-26T00:00:00Z"

Search by trace / request id

Many apps attach trace or requestId fields. To find all entries for a request:

jsonPayload.requestId="abcd-1234-xyz"
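The same correlation works offline: with a local JSON export, group entries by jsonPayload.requestId to reconstruct one request's flow (the entries below are hypothetical):

```python
from collections import defaultdict

# Hypothetical entries from a local export of structured logs.
entries = [
    {"jsonPayload": {"requestId": "abcd-1234-xyz", "message": "request received"}},
    {"jsonPayload": {"requestId": "other-999", "message": "unrelated entry"}},
    {"jsonPayload": {"requestId": "abcd-1234-xyz", "message": "db call timed out"}},
]

def group_by_request(entries):
    """Index messages by jsonPayload.requestId."""
    grouped = defaultdict(list)
    for e in entries:
        payload = e.get("jsonPayload", {})
        if payload.get("requestId"):
            grouped[payload["requestId"]].append(payload.get("message"))
    return grouped

print(group_by_request(entries)["abcd-1234-xyz"])
# ['request received', 'db call timed out']
```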

Count errors per minute (Log Analytics / Logs SQL)

Use SQL in Log Analytics for quick aggregation (the FROM clause names your log view; see the docs for the exact form):

SELECT
  TIMESTAMP_TRUNC(timestamp, MINUTE) AS minute,
  COUNT(*) AS error_count
FROM
  `PROJECT_ID.LOCATION.BUCKET_ID.VIEW_ID`  -- e.g. my-project.global._Default._AllLogs
WHERE
  severity IN ('ERROR', 'CRITICAL', 'ALERT', 'EMERGENCY')
GROUP BY minute
ORDER BY minute DESC

For precise Log SQL syntax and examples, consult GCP Log Analytics docs.


8) Exporting logs and alerts (short how-to)

Export to BigQuery (sink)

  1. In Cloud Console → Logging → Logs Router → Create Sink.

  2. Choose BigQuery dataset as the destination and an appropriate filter for only the logs you need.

  3. Use BigQuery to run historical analytics, ML models, or join with business data.

Alerting basics

  • Create a Log-based metric (count of error log entries), then build an Alerting policy in Cloud Monitoring on that metric (thresholds, notification channels). This gives reliable automated alerts instead of manual checks.


9) Best practices & tips

  • Structured logging: log JSON with standardized fields (requestId, userId, service, span/trace) to make queries and grouping trivial.

  • Use labels: include service and environment labels to quickly slice logs (prod/staging).

  • Limit noise: exclude low-value logs at ingestion or use sinks to route verbose logs elsewhere.

  • Retention & cost: exporting to BigQuery and long retention costs money — design retention policies accordingly.
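On services like Cloud Run and GKE, a single JSON line written to stdout is ingested as a structured jsonPayload (a top-level severity key sets the entry's severity). A minimal sketch of the structured-logging bullet above; the field names are illustrative:

```python
import json

def format_log_entry(severity: str, message: str, **fields) -> str:
    """Render one structured log line for Cloud Logging to parse."""
    return json.dumps({"severity": severity, "message": message, **fields})

# One line per entry on stdout; the logging agent does the rest.
print(format_log_entry("ERROR", "payment failed",
                       requestId="abcd-1234-xyz", service="payments", env="prod"))
```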


10) Troubleshooting & common pitfalls

  • No logs appearing? Check IAM permissions (Viewer / Logs Viewer) and ensure log ingestion is enabled for the service. Also confirm gcloud is pointed to the correct project.

  • Fields not searchable? Make sure logs are structured and fields are present in the jsonPayload or labels; use extraction if needed.

  • Large result sets slow the UI: narrow the time window, or cap output with --limit in gcloud logging read.


11) Further reading & plugin links

  • Cloud Logging docs (Logs Explorer & Log Analytics).

  • Cloud Code for IntelliJ (install / setup).

  • Claude / Claude Code JetBrains integrations and IntelliJ plugins (examples & marketplace).


12) Quick checklist to get started right now

  1. Enable logs for your service in GCP.

  2. Install Cloud Code and a Claude plugin in IntelliJ.

  3. Run a simple query in Logs Explorer (severity>=ERROR) and pick a sample error entry.

  4. Paste that sample into Claude in IntelliJ and ask: “Summarize this error and give me a Logs Explorer query to find related entries.”

  5. Iterate: use the generated query, export to BigQuery if you need long-term analysis, and create log-based metrics for alerting. 
