This quickstart shows how to instrument your LangChain application using OpenLLMetry and Scorecard for observability, debugging, and evaluation.
Looking for general tracing guidance? Check out the Tracing Quickstart for an overview of tracing concepts and alternative integration methods.

Steps

1. Install dependencies

Install the Traceloop SDK and the LangChain instrumentation package.
pip install traceloop-sdk opentelemetry-instrumentation-langchain
2. Set up environment variables

Configure the Traceloop SDK to send traces to Scorecard. Get your Scorecard API key from Settings.
export TRACELOOP_API_KEY="<your_scorecard_api_key>"
export TRACELOOP_BASE_URL="https://tracing.scorecard.io/otel"
export SCORECARD_PROJECT_ID="<your-project-id>"
Replace <your_scorecard_api_key> with your actual Scorecard API key (starts with ak_).
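If any of these variables are missing, traces silently fail to reach Scorecard, so it can help to fail fast at startup. Here is a minimal sketch of such a check; the `missing_tracing_vars` helper is hypothetical and not part of the Traceloop SDK:

```python
import os

# Hypothetical startup check (not part of the Traceloop SDK): report any
# required tracing variables that are unset or empty.
REQUIRED_VARS = ["TRACELOOP_API_KEY", "TRACELOOP_BASE_URL", "SCORECARD_PROJECT_ID"]

def missing_tracing_vars(env=None):
    """Return the names of required variables that are unset or empty."""
    env = os.environ if env is None else env
    return [name for name in REQUIRED_VARS if not env.get(name)]

# Example: raise before initializing tracing if configuration is incomplete.
# missing = missing_tracing_vars()
# if missing:
#     raise RuntimeError(f"Missing tracing configuration: {', '.join(missing)}")
```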
3. Initialize tracing

Initialize the Traceloop SDK with LangChain instrumentation before importing LangChain modules.
Import order matters! You must initialize Traceloop before importing any LangChain modules to ensure all calls are properly instrumented.
import os
from traceloop.sdk import Traceloop
from traceloop.sdk.instruments import Instruments

# Set scorecard.project_id to route traces to a specific project (defaults to oldest project)
Traceloop.init(
    disable_batch=True,
    instruments={Instruments.LANGCHAIN},
    resource_attributes={
        "scorecard.project_id": os.getenv("SCORECARD_PROJECT_ID")
    }
)

# Now import your LangChain modules
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate
4. Run your LangChain application

With tracing initialized, run your LangChain application. All LLM calls, chain executions, and agent actions are automatically traced. Here's a full example:
example.py
import os
from traceloop.sdk import Traceloop
from traceloop.sdk.instruments import Instruments

# Set scorecard.project_id to route traces to a specific project (defaults to oldest project)
Traceloop.init(
    disable_batch=True,
    instruments={Instruments.LANGCHAIN},
    resource_attributes={
        "scorecard.project_id": os.getenv("SCORECARD_PROJECT_ID")
    }
)

# Then import LangChain
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate

# Create a simple chain
prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant."),
    ("user", "{input}")
])

model = ChatOpenAI(model="gpt-4o-mini")
chain = prompt | model

# Run the chain - this will be traced
response = chain.invoke({"input": "What is the capital of France?"})
print(response.content)
You may see Failed to export batch warnings in the console. These can be safely ignored - your traces are still being captured and sent to Scorecard successfully.
5. View traces in Scorecard

Navigate to the Records page in Scorecard to see your LangChain traces.
It may take 1-2 minutes for traces to appear on the Records page.
(Screenshot: Records page showing LangChain traces)
Click on any record to view the full trace details, including chain execution, LLM calls, and token usage.
(Screenshot: Trace details view)

What Gets Traced

OpenLLMetry automatically captures comprehensive telemetry from your LangChain applications. Scorecard includes enhanced LangChain/Traceloop adapter support for better trace visualization:
  • LLM Calls - Every LLM invocation with full prompt and completion, including model information and token counts
  • Chains - Chain executions with inputs, outputs, and intermediate steps
  • Agents - Agent reasoning steps, tool selections, and action outputs
  • Tools - Tool invocations with proper tool call sections (not prompt/completion)
  • Retrievers - Document retrieval operations and retrieved content
  • Token Usage - Input, output, and total token counts per LLM call, extracted from gen_ai.* attributes
  • Errors - Any failures with full error context and stack traces
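As an illustration of the token-usage extraction, the sketch below reads counts from a span's `gen_ai.*` attributes. The attribute names follow the OpenTelemetry GenAI semantic conventions; the `token_usage` function is a simplified assumption, not Scorecard's actual implementation:

```python
# Illustrative sketch only (not Scorecard's code): derive per-call token
# counts from a span's gen_ai.* attributes.
def token_usage(span_attributes):
    """Return input/output/total token counts from GenAI span attributes."""
    input_tokens = span_attributes.get("gen_ai.usage.input_tokens", 0)
    output_tokens = span_attributes.get("gen_ai.usage.output_tokens", 0)
    return {
        "input": input_tokens,
        "output": output_tokens,
        "total": input_tokens + output_tokens,
    }
```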

Enhanced Span Classification

Scorecard’s LangChain adapter recognizes both OpenInference (openinference.*) and Traceloop (traceloop.*) attribute formats:
  • Workflow spans (traceloop.span.kind: workflow) - High-level application flows
  • Task spans (traceloop.span.kind: task) - Individual processing steps
  • Tool spans (traceloop.span.kind: tool) - Tool invocations with dedicated Tool Call sections
  • LLM spans - Model calls with extracted model names, token counts, and costs
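The classification above can be sketched as a lookup on the `traceloop.span.kind` attribute, falling back to `gen_ai.*` attributes for LLM spans. This is an assumed simplification for illustration, not Scorecard's adapter code:

```python
# Assumed sketch of the span classification described above
# (not Scorecard's implementation).
KIND_LABELS = {
    "workflow": "Workflow span",
    "task": "Task span",
    "tool": "Tool span",
}

def classify_span(attributes):
    """Map a span's attributes to the category it would be displayed as."""
    kind = attributes.get("traceloop.span.kind")
    if kind in KIND_LABELS:
        return KIND_LABELS[kind]
    # LLM spans carry gen_ai.* attributes rather than a traceloop span kind
    if any(key.startswith("gen_ai.") for key in attributes):
        return "LLM span"
    return "Unclassified"
```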

Tool Visualization

Common LangChain tools receive appropriate coloring and categorization:
  • Retrievers (retriever, vectorstore, search)
  • SQL tools (sql, database)
  • Web search (search, google, bing)
  • Custom tools - Automatically detected from span names
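The name-based detection might look roughly like the sketch below. The keyword lists come from the bullets above, but the matching logic and precedence order are assumptions for illustration only:

```python
# Hypothetical illustration of name-based tool categorization; keyword lists
# come from the documentation above, precedence order is an assumption.
TOOL_CATEGORIES = [
    ("SQL tool", ("sql", "database")),
    ("Web search", ("google", "bing")),
    ("Retriever", ("retriever", "vectorstore", "search")),
]

def categorize_tool(span_name):
    """Return the first category whose keywords match the span name."""
    name = span_name.lower()
    for category, keywords in TOOL_CATEGORIES:
        if any(keyword in name for keyword in keywords):
            return category
    return "Custom tool"
```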

Next Steps