franz.graphtalker package

Submodules

franz.graphtalker.client module

GraphTalker Python client for the eval server.

Provides a high-level Pythonic API for interacting with AllegroGraph via Claude AI, hiding the underlying Lisp expressions completely.

class franz.graphtalker.client.GraphTalkerClient(host: str = 'localhost', port: int = 8080, api_key: str | None = None, timeout: float = 30.0, default_query_timeout: float = 300.0, username: str | None = None, base_url: str | None = None, auth: tuple | None = None)

Bases: object

Python client for GraphTalker eval server.

Provides a high-level Pythonic API for interacting with AllegroGraph via Claude AI, hiding the underlying Lisp expressions completely.

For advanced users, the eval() method provides direct access to evaluate arbitrary Lisp expressions.

Args:

host: Eval server hostname. Ignored when base_url is provided.
port: Eval server port. Ignored when base_url is provided.
api_key: API key for authentication (None for no auth).
timeout: Default request timeout in seconds.
default_query_timeout: Default timeout for Claude query methods (claude_query, claude_ask, generate_sparql). Defaults to 300 seconds (5 minutes) since Claude queries involve multiple tool-calling iterations.
username: Optional username tag for session management. When set, save_session() and list_sessions() use this as the default owner/filter. Not a security boundary.
base_url: Full base URL for the GraphTalker eval server. When provided it is used directly, overriding the http://host:port default. Use this when connecting through an AllegroGraph proxy, e.g. "http://ag-host:10035/graphtalker/8080".
auth: (username, password) tuple for HTTP Basic Auth. Used when connecting through an AllegroGraph proxy, which requires the same credentials as the AG server itself.

Example — direct connection:

client = GraphTalkerClient(port=8080, api_key="my-key")
client.connect("http", "localhost", 10035, "", "hr-analytics", "test", "xyzzy")
result = client.claude_query("Show me all the classes")
print(result.answer)

Example — AllegroGraph proxy connection:

client = GraphTalkerClient(
    base_url="http://localhost:10035/graphtalker/8080",
    api_key="my-key",
)
client.connect("http", "localhost", 10035, "", "hr-analytics", "test", "xyzzy")
result = client.claude_query("Show me all the classes")
print(result.answer)
__init__(host: str = 'localhost', port: int = 8080, api_key: str | None = None, timeout: float = 30.0, default_query_timeout: float = 300.0, username: str | None = None, base_url: str | None = None, auth: tuple | None = None)
abort_query() bool

Abort the currently running query on the eval server.

Call this from a different thread while claude_query() or generate_sparql() is blocking in another thread. The blocked call will raise QueryAbortedError.

Returns:

True if a query was aborted, False if no query was running.

Raises:

ConnectionError: Cannot reach the eval server.

Example:

import threading
# Thread 1: long-running query
def worker():
    try:
        client.claude_query("complex question...")
    except QueryAbortedError:
        print("Query was aborted")

t = threading.Thread(target=worker)
t.start()

# Thread 2: abort after some time
time.sleep(5)
client.abort_query()
t.join()
claude_ask(question: str, *, max_iterations: int = 10, timeout: float | None = None) QueryResult

Ask Claude a fresh question (always starts new conversation).

Convenience wrapper equivalent to claude_query(question, continue_conversation=False).

Args:

question: Natural language question.
max_iterations: Maximum tool-calling iterations.
timeout: Request timeout in seconds.

Returns:

QueryResult with answer, stdout, and raw_result.

claude_query(question: str, *, continue_conversation: bool = True, max_iterations: int = 10, timeout: float | None = None) QueryResult

Ask Claude a question with full tool access.

By default continues the previous conversation so Claude can reuse schema information and context from prior questions.

Args:

question: Natural language question about your data.
continue_conversation: If True (default), continue the existing conversation. If False, start fresh.
max_iterations: Maximum tool-calling iterations (default: 10).
timeout: Request timeout in seconds.

Returns:

QueryResult with answer, stdout, and raw_result.

Example:

result = client.claude_query("Find all employees in engineering")
print(result.answer)
# Follow-up (continues conversation by default):
result = client.claude_query("Now show their salaries")
clear_conversation() None

Clear conversation history while preserving all configuration.

Preserves AllegroGraph connection, API keys, query library settings, prompt caching configuration, and SHACL cache.

close()

Stop the GraphTalker server and close the underlying HTTP session.

condense_conversation(*, keep_recent: int = 2, verbose: bool = True) int | None

Condense conversation to reduce context size.

Keeps recent interactions intact and prunes intermediate tool calls from older interactions.

Args:

keep_recent: Number of recent interactions to keep intact.
verbose: Print condensation details.

Returns:

Number of messages removed, or None if nothing to condense.

connect(protocol: str, host: str, port: int, catalog: str, repository: str, user: str, password: str) str

Initialize AllegroGraph connection.

Args:

protocol: "http" or "https".
host: AllegroGraph server hostname.
port: AllegroGraph server port.
catalog: Catalog name ("" or "/" for the root catalog).
repository: Repository name.
user: AllegroGraph username.
password: AllegroGraph password.

Returns:

Connection confirmation message.
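For orientation, the arguments to connect() correspond to an AllegroGraph repository URL. The helper below is hypothetical (not part of the client) and assumes the standard AllegroGraph REST path layout, with "" or "/" selecting the root catalog as documented above:

```python
def agraph_repo_url(protocol, host, port, catalog, repository):
    """Illustrative only: the repository URL the connect() arguments describe."""
    base = f"{protocol}://{host}:{port}"
    if catalog in ("", "/"):
        # "" or "/" selects the root catalog
        return f"{base}/repositories/{repository}"
    return f"{base}/catalogs/{catalog}/repositories/{repository}"

print(agraph_repo_url("http", "localhost", 10035, "", "hr-analytics"))
# http://localhost:10035/repositories/hr-analytics
```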

delete_query(*, query_uri: str | None = None, query_title: str | None = None, repository: str | None = None) str

Delete a query from the query library.

Provide either query_uri or query_title (with repository).

Args:

query_uri: URI of the query to delete.
query_title: Title of the query to delete.
repository: Repository name (used with query_title).

Returns:

Deletion confirmation.

delete_session(session_id: str) None

Delete a saved session.

Args:

session_id: The session ID to delete.

eval(expression: str, *, timeout: float | None = None) EvalResult

Evaluate a Lisp expression on the eval server.

This is the low-level escape hatch. The expression is sent as-is and evaluated in the :agraph-claude-tools package.

Args:

expression: A Lisp expression as a string.
timeout: Request timeout in seconds (overrides the default).

Returns:

EvalResult with stdout, result, error, and parsed fields.

Raises:

ConnectionError: Cannot reach the eval server.
AuthenticationError: Invalid or missing API key.
EvalError: Lisp evaluation raised an error.
TimeoutError: Request timed out.
ServerError: Unexpected HTTP error.

execute_tool(tool_name: str, **params: Any) str

Execute a named tool directly (bypasses Claude).

This is a lower-level API for calling any of the 30+ tools directly. Parameter values are automatically converted to Lisp representations.

Args:

tool_name: Tool name (e.g., "sparql_query", "get_shacl").
**params: Tool parameters as keyword arguments.

Returns:

Tool result as a string.

Example:

result = client.execute_tool("get_shacl")
result = client.execute_tool(
    "sparql_query",
    query="SELECT ?s WHERE { ?s ?p ?o } LIMIT 5",
)
generate_sparql(question: str, *, continue_conversation: bool = True, max_iterations: int = 10, timeout: float | None = None) str

Ask Claude to generate a SPARQL query for a natural language question.

Claude will fetch the schema, examine example queries, and iteratively test the query before returning the final version.

Args:

question: Natural language question to convert to SPARQL.
continue_conversation: If True (default), continue the conversation.
max_iterations: Maximum tool-calling iterations.
timeout: Request timeout in seconds.

Returns:

The generated SPARQL query string.

Example:

sparql = client.generate_sparql("How many products were sold last month?")
print(sparql)
get_pending_visualization(ref_id: str) PendingVisualization

Fetch a cached visualization config by its reference ID.

After Claude generates a chart and calls prepare_visualization, the config is cached server-side. Use this method to retrieve it for rendering without storing it permanently.

Also works for map configs from build_map_visualization.

Args:

ref_id: Reference ID (e.g. “viz-config-123-456” or “map-config-123-456”).

Returns:

PendingVisualization with config, type, description, and summary.

Raises:

EvalError: If the reference ID is not found (expired or invalid).

Example:

result = client.claude_query("Show product distribution as a pie chart")
# Extract ref_id from result.answer or result.stdout
viz = client.get_pending_visualization("viz-config-123-456")
print(viz.viz_type)  # "pie_chart"
print(viz.config)   # Chart.js config dict
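The reference-ID prefixes shown above ("viz-config-" for charts and network graphs, "map-config-" for maps) can be used to route a pending config to the right renderer. A hypothetical helper, not part of the client:

```python
def ref_kind(ref_id):
    """Classify a pending-visualization reference ID by its prefix."""
    if ref_id.startswith("viz-config-"):
        return "chart"
    if ref_id.startswith("map-config-"):
        return "map"
    raise ValueError(f"not a visualization reference ID: {ref_id!r}")

print(ref_kind("viz-config-123-456"))  # chart
print(ref_kind("map-config-123-456"))  # map
```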
get_token_cost_stats() TokenCostStats

Get structured token usage, cost, and context statistics.

Returns all token counts, estimated costs (USD), cache savings, and context window usage in a single structured object.

Returns:

TokenCostStats with all token/cost/context data.

Example:

stats = client.get_token_cost_stats()
print(f"Total cost: ${stats.total_cost:.4f}")
print(f"Context: {stats.context_percentage:.1f}% used")
if stats.cache_savings > 0:
    print(f"Cache saved: ${stats.cache_savings:.4f}")

# Per-query cost tracking:
before = client.get_token_cost_stats()
result = client.claude_query("Find all employees...")
after = client.get_token_cost_stats()
query_cost = after.total_cost - before.total_cost
get_variable(variable_name: str) Any

Get the value of a Lisp configuration variable.

Args:

variable_name: Variable name including asterisks (e.g., "*claude-model*").

Returns:

The parsed Python value.

get_visualizations(query_title: str, *, repository: str | None = None) List[Visualization]

Get all stored visualizations for a query from the query library.

Args:

query_title: Title of the query to get visualizations for.
repository: Repository name filter (optional).

Returns:

List of Visualization objects.

Example:

vizs = client.get_visualizations("Product Distribution")
for v in vizs:
    print(f"{v.viz_type}: {v.description}")
health_check() bool

Check if the eval server is running and healthy.

Returns:

True if server is healthy, False otherwise.

Raises:

ConnectionError: Cannot reach the server.

list_all_queries(repository: str | None = None) str

List all queries in the query library.

Args:

repository: Repository name filter (optional).

Returns:

All stored queries as a string.

list_sessions(*, username: ~typing.Any = <object object>, repository: str | None = None) List[SessionInfo]

List saved sessions, optionally filtered by username and repository.

By default, filters to the client’s username (if set). Pass username=None explicitly to see all users’ sessions.

Args:
username: Filter by username. Defaults to self.username. Pass None to list all sessions regardless of owner.
repository: Filter by repository name (optional).

Returns:

List of SessionInfo objects sorted by most-recently modified.

Example:

sessions = client.list_sessions()
for s in sessions:
    print(f"{s.title} ({s.message_count} messages)")
load_config(config_path: str) None

Load configuration from a JSON file on the server.

Args:

config_path: Absolute path to config.json on the server.

reset_context_stats() None

Reset all token, cost, and cache statistics counters.

restore_session(session_id: str) SessionInfo

Restore a previously saved session into the conversation history.

Args:

session_id: The session ID returned by save_session().

Returns:

SessionInfo with metadata about the restored session.

Raises:

EvalError: If the session is not found.

save_session(title: str, *, session_id: str | None = None) str

Save the current conversation as a named session.

Args:

title: Human-readable title for the session.
session_id: Optional ID to update an existing session. If None, a new session ID is generated.

Returns:

The session ID (can be used for restore/delete).

Example:

sid = client.save_session("Employee analysis Q3")
# Later:
client.restore_session(sid)
search_queries(search: str, repository: str | None = None) str

Search the query library by natural language description.

Args:

search: Search terms.
repository: Repository name filter (optional).

Returns:

Matching queries as a string.

set_api_key(api_key: str) None

Set the Anthropic API key on the server.

Args:

api_key: Anthropic API key string.

set_max_iterations(n: int) int

Set the default maximum iterations for Claude queries.

Args:

n: Number of iterations (1-100).

Returns:

The new value.

set_meta_schema_repository(repo_string: str) bool

Set the meta-schema repository for the semantic layer.

Args:

repo_string: “catalog:repository” or just repository name.

Returns:

True if repository exists and was set, False otherwise.

set_model(model: str) None

Set the Claude model to use.

Args:

model: Model identifier (e.g., “claude-sonnet-4-5-20250929”).

set_prompt_caching(enabled: bool) None

Enable or disable prompt caching.

Args:

enabled: True to enable, False to disable.

set_query_library(repo_string: str) bool

Set the query library repository location.

Args:
repo_string: "catalog:repository", "/:repo" for the root catalog, or just "repo" for the current catalog.

Returns:

True if repository exists and was set, False otherwise.

set_repository(repo_string: str) str

Switch the active AllegroGraph repository.

Args:
repo_string: Repository in "catalog:repository" format, or just "repository" for the root catalog.

Returns:

Confirmation message.
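The "catalog:repository" convention used by set_repository() and set_query_library() can be illustrated with a small parser. This is a hypothetical helper, not part of the client; it assumes a bare name means the root catalog, as documented above:

```python
def split_repo_string(repo_string):
    """Split "catalog:repository" into (catalog, repository).

    A bare repository name is treated as living in the root catalog,
    represented here by an empty catalog string.
    """
    if ":" in repo_string:
        catalog, repository = repo_string.split(":", 1)
    else:
        catalog, repository = "", repo_string
    return catalog, repository

print(split_repo_string("analytics:sales"))  # ('analytics', 'sales')
print(split_repo_string("sales"))            # ('', 'sales')
```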

set_variable(variable_name: str, value: Any) None

Set a Lisp configuration variable.

Args:
variable_name: Variable name including asterisks (e.g., "*max-tokens*").
value: Python value (str, int, float, bool, or None).

show_available_tools() str

List all available tools with descriptions.

Returns:

Formatted tool list.

show_conversation_history() str

Display the current conversation history.

Returns:

Formatted conversation history.

sparql_query(query: str) str

Execute a SPARQL SELECT/CONSTRUCT/ASK/DESCRIBE query directly.

Bypasses Claude – runs the query directly against AllegroGraph.

Args:

query: SPARQL query string.

Returns:

Query results as a string.

sparql_update(update: str) str

Execute a SPARQL UPDATE/INSERT/DELETE directly.

Bypasses Claude – runs the update directly against AllegroGraph.

Args:

update: SPARQL update string.

Returns:

Result confirmation string.
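Because sparql_query() and sparql_update() take raw query strings, any user-supplied value should be escaped before being interpolated into a query. A minimal sketch (hypothetical helper, not part of the client) covering backslashes and double quotes:

```python
def sparql_string_literal(value):
    """Escape a Python string for use as a double-quoted SPARQL string literal."""
    escaped = value.replace("\\", "\\\\").replace('"', '\\"')
    return f'"{escaped}"'

# Interpolate a possibly unsafe value into a query string:
name = sparql_string_literal('Alice "Ace" Smith')
query = f"SELECT ?s WHERE {{ ?s ?p {name} }}"
print(query)
```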

start_conversation() None

Start a new conversation, clearing previous history.

stop() None

Shut down the GraphTalker eval server process.

Calls (stop-eval-server) on the server, which unpublishes all HTTP endpoints and terminates the process. After this call the client is no longer usable.

Raises:

ConnectionError: Cannot reach the eval server.
AuthenticationError: Invalid or missing API key.

store_query(title: str, description: str, sparql_query: str, repository: str | None = None) str

Store a SPARQL query in the query library.

Args:

title: Query title for identification.
description: Human-readable description for search/discovery.
sparql_query: The SPARQL query string.
repository: Repository name override (defaults to current).

Returns:

Result confirmation.

store_visualization(query_title: str, viz_config: str, repository: str, *, viz_type: str | None = None, description: str | None = None, summary: str | None = None) str

Store a visualization for a query in the query library.

The viz_config can be a reference ID from prepare_visualization or build_map_visualization, in which case viz_type, description, and summary are auto-filled from the cached metadata.

Args:

query_title: Title of the query this visualization belongs to.
viz_config: Reference ID string (preferred) or raw JSON config.
repository: Repository name.
viz_type: Visualization type (auto-filled when using a reference ID).
description: Description (auto-filled when using a reference ID).
summary: Markdown summary (auto-filled when using a reference ID).

Returns:

Confirmation message with visualization ID.

Example:

# Using reference ID (preferred):
client.store_visualization("Product Distribution",
                           "viz-config-123-456", "my-repo")

# Using raw config:
client.store_visualization("Product Distribution",
                           '{"type":"pie",...}', "my-repo",
                           viz_type="pie_chart",
                           description="Product breakdown")
test_connection() bool

Test the AllegroGraph connection.

Returns:

True if connected successfully.

property username: str | None

Get the current username tag used for session management.

franz.graphtalker.exceptions module

Exception hierarchy for the GraphTalker client.

exception franz.graphtalker.exceptions.AuthenticationError

Bases: GraphTalkerError

Authentication failed (401 from eval server).

exception franz.graphtalker.exceptions.ConnectionError

Bases: GraphTalkerError

Failed to connect to the eval server.

exception franz.graphtalker.exceptions.EvalError(message: str, stdout: str = '')

Bases: GraphTalkerError

Lisp evaluation raised an error.

Attributes:

lisp_error: The error message from Lisp.
stdout: Any output produced before the error.

__init__(message: str, stdout: str = '')
exception franz.graphtalker.exceptions.GraphTalkerError

Bases: Exception

Base exception for all GraphTalker errors.

exception franz.graphtalker.exceptions.QueryAbortedError(message: str, stdout: str = '')

Bases: EvalError

Query was aborted by the user via abort_query().

Subclass of EvalError so existing except EvalError handlers still catch it, but callers can distinguish aborts if needed.

exception franz.graphtalker.exceptions.ServerError(status_code: int, body: str)

Bases: GraphTalkerError

Eval server returned an unexpected HTTP error.

Attributes:

status_code: The HTTP status code.
body: The response body.

__init__(status_code: int, body: str)
exception franz.graphtalker.exceptions.TimeoutError

Bases: GraphTalkerError

Request to eval server timed out.
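Because QueryAbortedError subclasses EvalError, an `except EvalError` clause also catches aborts; list the more specific class first to tell them apart. A standalone sketch of the catch order, using stand-in classes so it runs without the package:

```python
# Stand-in classes mirroring the documented hierarchy (not the real package).
class GraphTalkerError(Exception): pass
class EvalError(GraphTalkerError): pass
class QueryAbortedError(EvalError): pass

def classify(run):
    """Run a callable and report which documented category its error falls in."""
    try:
        run()
    except QueryAbortedError:
        return "aborted"      # must come before EvalError to be distinguishable
    except EvalError:
        return "eval-error"
    except GraphTalkerError:
        return "other"

def abort():
    raise QueryAbortedError("aborted by user")

def eval_fail():
    raise EvalError("boom")

print(classify(abort))      # aborted
print(classify(eval_fail))  # eval-error
```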

franz.graphtalker.models module

Dataclasses for structured results from the GraphTalker eval server.

class franz.graphtalker.models.EvalResult(stdout: str, result: str, error: str | None)

Bases: object

Raw result from the eval server.

Attributes:

stdout: All printed output from the Lisp side.
result: The return value string (Lisp ~S format).
error: Error message if evaluation failed, None otherwise.
parsed: The result string parsed into a Python value.

__init__(stdout: str, result: str, error: str | None) None
error: str | None
parsed: Any = None
result: str
stdout: str
class franz.graphtalker.models.PendingVisualization(ref_id: str, viz_type: str | None = None, config: dict | None = None, description: str | None = None, summary: str | None = None)

Bases: object

A visualization config cached server-side, not yet stored permanently.

Created by Claude calling prepare_visualization (charts/network graphs) or build_map_visualization (maps). Retrievable by reference ID.

Attributes:

ref_id: Reference ID (e.g. "viz-config-123-456" or "map-config-123-456").
viz_type: Visualization type (e.g. "pie_chart", "map", "network_graph").
config: The visualization config as a parsed dict (Chart.js, GeoJSON, etc.).
description: Description of what the visualization shows.
summary: Optional markdown narrative summary.

__init__(ref_id: str, viz_type: str | None = None, config: dict | None = None, description: str | None = None, summary: str | None = None) None
config: dict | None = None
description: str | None = None
ref_id: str
summary: str | None = None
viz_type: str | None = None
class franz.graphtalker.models.QueryResult(answer: str, stdout: str, raw_result: str)

Bases: object

Result from a Claude query (claude_query, claude_ask, generate_sparql).

Attributes:

answer: Claude's final answer text.
stdout: Full conversation output (iterations, tool calls, etc.).
raw_result: The raw Lisp result string.

__init__(answer: str, stdout: str, raw_result: str) None
answer: str
raw_result: str
stdout: str
class franz.graphtalker.models.SessionInfo(session_id: str, title: str, username: str | None = None, repository: str = '', message_count: int = 0, created: str | None = None, modified: str | None = None)

Bases: object

Information about a saved conversation session.

Attributes:

session_id: Unique session identifier (the fragment after # in the URI).
title: Human-readable session title.
username: Owner username tag (None if not set).
repository: Repository the session was saved against.
message_count: Number of messages in the saved conversation.
created: ISO 8601 creation timestamp.
modified: ISO 8601 last-modified timestamp.

__init__(session_id: str, title: str, username: str | None = None, repository: str = '', message_count: int = 0, created: str | None = None, modified: str | None = None) None
created: str | None = None
message_count: int = 0
modified: str | None = None
repository: str = ''
session_id: str
title: str
username: str | None = None
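As documented above, session_id is the fragment after "#" in the session URI. Extracting it can be sketched as follows (hypothetical helper, not part of the package):

```python
def session_id_from_uri(uri):
    """Return the fragment after '#', or the whole string if there is no '#'."""
    return uri.rsplit("#", 1)[-1]

print(session_id_from_uri("http://example.org/sessions#session-42"))
# session-42
```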
class franz.graphtalker.models.TokenCostStats(input_tokens: int = 0, output_tokens: int = 0, cache_write_tokens: int = 0, cache_read_tokens: int = 0, total_tokens: int = 0, input_cost: float = 0.0, output_cost: float = 0.0, cache_write_cost: float = 0.0, cache_read_cost: float = 0.0, total_cost: float = 0.0, cost_without_cache: float = 0.0, cache_savings: float = 0.0, cache_savings_percent: float = 0.0, context_tokens: int = 0, context_percentage: float = 0.0, context_limit: int = 200000, message_count: int = 0)

Bases: object

Token usage, cost, and context statistics from get_token_cost_stats().

Provides a single structured view of all token counts, estimated costs, cache savings, and context window usage for the current session.

Attributes:

input_tokens: Total input tokens (cumulative).
output_tokens: Total output tokens (cumulative).
cache_write_tokens: Tokens used to create cache entries.
cache_read_tokens: Tokens read from cache.
total_tokens: Sum of all token categories.
input_cost: Estimated cost for input tokens (USD).
output_cost: Estimated cost for output tokens (USD).
cache_write_cost: Estimated cost for cache writes (USD).
cache_read_cost: Estimated cost for cache reads (USD).
total_cost: Total estimated cost (USD).
cost_without_cache: What it would have cost without caching (USD).
cache_savings: Money saved by caching (USD).
cache_savings_percent: Cache savings as a percentage.
context_tokens: Current conversation size (tokens).
context_percentage: Current context window usage as a percentage.
context_limit: Anthropic's context limit (200000).
message_count: Number of messages in the conversation.

__init__(input_tokens: int = 0, output_tokens: int = 0, cache_write_tokens: int = 0, cache_read_tokens: int = 0, total_tokens: int = 0, input_cost: float = 0.0, output_cost: float = 0.0, cache_write_cost: float = 0.0, cache_read_cost: float = 0.0, total_cost: float = 0.0, cost_without_cache: float = 0.0, cache_savings: float = 0.0, cache_savings_percent: float = 0.0, context_tokens: int = 0, context_percentage: float = 0.0, context_limit: int = 200000, message_count: int = 0) None
cache_read_cost: float = 0.0
cache_read_tokens: int = 0
cache_savings: float = 0.0
cache_savings_percent: float = 0.0
cache_write_cost: float = 0.0
cache_write_tokens: int = 0
context_limit: int = 200000
context_percentage: float = 0.0
context_tokens: int = 0
cost_without_cache: float = 0.0
input_cost: float = 0.0
input_tokens: int = 0
message_count: int = 0
output_cost: float = 0.0
output_tokens: int = 0
total_cost: float = 0.0
total_tokens: int = 0
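The cost fields are related as documented: cache_savings is cost_without_cache minus total_cost, and cache_savings_percent expresses that saving relative to cost_without_cache. With made-up numbers (only the relationships mirror the documented fields):

```python
# Hypothetical values; the arithmetic mirrors the TokenCostStats fields.
cost_without_cache = 0.0120   # USD, had caching been disabled
total_cost = 0.0090           # USD, actual estimated cost
cache_savings = cost_without_cache - total_cost
cache_savings_percent = 100.0 * cache_savings / cost_without_cache
print(f"{cache_savings_percent:.0f}% saved")  # 25% saved
```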
class franz.graphtalker.models.Visualization(viz_id: str, viz_type: str, config: str, description: str = '', summary: str | None = None, created: str | None = None)

Bases: object

A visualization stored permanently in the query library.

Attributes:

viz_id: Visualization URI in the query library.
viz_type: Visualization type (e.g. "bar_chart", "pie_chart").
config: The visualization config as a string (JSON).
description: Description of what the visualization shows.
summary: Optional markdown narrative summary.
created: ISO 8601 creation timestamp.

__init__(viz_id: str, viz_type: str, config: str, description: str = '', summary: str | None = None, created: str | None = None) None
config: str
created: str | None = None
description: str = ''
summary: str | None = None
viz_id: str
viz_type: str
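Unlike PendingVisualization.config (a parsed dict), Visualization.config is stored as a JSON string, so parse it before handing it to a renderer. A minimal sketch with a made-up Chart.js-style config:

```python
import json

# A stored Visualization keeps config as a JSON string (hypothetical example).
config_str = '{"type": "pie", "data": {"labels": ["A", "B"], "datasets": [{"data": [3, 7]}]}}'
config = json.loads(config_str)  # now a dict, like PendingVisualization.config
print(config["type"])  # pie
```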

Module contents

GraphTalker - Python client for GraphTalker eval server.

A high-level Pythonic API for interacting with AllegroGraph RDF databases via Claude AI, powered by the GraphTalker Common Lisp system.

Example:

from franz.graphtalker import GraphTalkerClient

with GraphTalkerClient(port=8080, api_key="my-key") as client:
    client.connect("http", "localhost", 10035, "", "hr-analytics", "test", "xyzzy")
    result = client.claude_query("Show me all the classes")
    print(result.answer)
exception franz.graphtalker.AuthenticationError

Bases: GraphTalkerError

Authentication failed (401 from eval server).

exception franz.graphtalker.ConnectionError

Bases: GraphTalkerError

Failed to connect to the eval server.

exception franz.graphtalker.EvalError(message: str, stdout: str = '')

Bases: GraphTalkerError

Lisp evaluation raised an error.

Attributes:

lisp_error: The error message from Lisp. stdout: Any output produced before the error.

__init__(message: str, stdout: str = '')
class franz.graphtalker.EvalResult(stdout: str, result: str, error: str | None)

Bases: object

Raw result from the eval server.

Attributes:

stdout: All printed output from the Lisp side. result: The return value string (Lisp ~S format). error: Error message if evaluation failed, None otherwise. parsed: The result string parsed into a Python value.

__init__(stdout: str, result: str, error: str | None) None
error: str | None
parsed: Any = None
result: str
stdout: str
class franz.graphtalker.GraphTalkerClient(host: str = 'localhost', port: int = 8080, api_key: str | None = None, timeout: float = 30.0, default_query_timeout: float = 300.0, username: str | None = None, base_url: str | None = None, auth: tuple | None = None)

Bases: object

Python client for GraphTalker eval server.

Provides a high-level Pythonic API for interacting with AllegroGraph via Claude AI, hiding the underlying Lisp expressions completely.

For advanced users, the eval() method provides direct access to evaluate arbitrary Lisp expressions.

Args:

host: Eval server hostname. Ignored when base_url is provided. port: Eval server port. Ignored when base_url is provided. api_key: API key for authentication (None for no auth). timeout: Default request timeout in seconds. default_query_timeout: Default timeout for Claude query methods

(claude_query, claude_ask, generate_sparql). Defaults to 300 seconds (5 minutes) since Claude queries involve multiple tool-calling iterations.

username: Optional username tag for session management.

When set, save_session() and list_sessions() use this as the default owner/filter. Not a security boundary.

base_url: Full base URL for the GraphTalker eval server. When

provided it is used directly, overriding the http://host:port default. Use this when connecting through an AllegroGraph proxy, e.g. "http://ag-host:10035/graphtalker/8080".

auth: (username, password) tuple for HTTP Basic Auth. Used when

connecting through an AllegroGraph proxy, which requires the same credentials as the AG server itself.

Example — direct connection:

client = GraphTalkerClient(port=8080, api_key="my-key")
client.connect("http", "localhost", 10035, "", "hr-analytics", "test", "xyzzy")
result = client.claude_query("Show me all the classes")
print(result.answer)

Example — AllegroGraph proxy connection:

client = GraphTalkerClient(
    base_url="http://localhost:10035/graphtalker/8080",
    api_key="my-key",
)
client.connect("http", "localhost", 10035, "", "hr-analytics", "test", "xyzzy")
result = client.claude_query("Show me all the classes")
print(result.answer)
__init__(host: str = 'localhost', port: int = 8080, api_key: str | None = None, timeout: float = 30.0, default_query_timeout: float = 300.0, username: str | None = None, base_url: str | None = None, auth: tuple | None = None)
abort_query() bool

Abort the currently running query on the eval server.

Call this from a different thread while claude_query() or generate_sparql() is blocking in another thread. The blocked call will raise QueryAbortedError.

Returns:

True if a query was aborted, False if no query was running.

Raises:

ConnectionError: Cannot reach the eval server.

Example:

import threading
# Thread 1: long-running query
def worker():
    try:
        client.claude_query("complex question...")
    except QueryAbortedError:
        print("Query was aborted")

t = threading.Thread(target=worker)
t.start()

# Thread 2: abort after some time
time.sleep(5)
client.abort_query()
t.join()
claude_ask(question: str, *, max_iterations: int = 10, timeout: float | None = None) QueryResult

Ask Claude a fresh question (always starts new conversation).

Convenience wrapper equivalent to claude_query(question, continue_conversation=False).

Args:

question: Natural language question. max_iterations: Maximum tool-calling iterations. timeout: Request timeout in seconds.

Returns:

QueryResult with answer, stdout, and raw_result.

claude_query(question: str, *, continue_conversation: bool = True, max_iterations: int = 10, timeout: float | None = None) QueryResult

Ask Claude a question with full tool access.

By default continues the previous conversation so Claude can reuse schema information and context from prior questions.

Args:

question: Natural language question about your data. continue_conversation: If True (default), continue existing

conversation. If False, start fresh.

max_iterations: Maximum tool-calling iterations (default: 10). timeout: Request timeout in seconds.

Returns:

QueryResult with answer, stdout, and raw_result.

Example:

result = client.claude_query("Find all employees in engineering")
print(result.answer)
# Follow-up (continues conversation by default):
result = client.claude_query("Now show their salaries")
clear_conversation() None

Clear conversation history while preserving all configuration.

Preserves AllegroGraph connection, API keys, query library settings, prompt caching configuration, and SHACL cache.

close()

Stop the GraphTalker server and close the underlying HTTP session.

condense_conversation(*, keep_recent: int = 2, verbose: bool = True) int | None

Condense conversation to reduce context size.

Keeps recent interactions intact and prunes intermediate tool calls from older interactions.

Args:

keep_recent: Number of recent interactions to keep intact. verbose: Print condensation details.

Returns:

Number of messages removed, or None if nothing to condense.

connect(protocol: str, host: str, port: int, catalog: str, repository: str, user: str, password: str) str

Initialize AllegroGraph connection.

Args:

protocol: “http” or “https”. host: AllegroGraph server hostname. port: AllegroGraph server port. catalog: Catalog name (”” or “/” for root catalog). repository: Repository name. user: AllegroGraph username. password: AllegroGraph password.

Returns:

Connection confirmation message.

delete_query(*, query_uri: str | None = None, query_title: str | None = None, repository: str | None = None) str

Delete a query from the query library.

Provide either query_uri or query_title (with repository).

Args:

query_uri: URI of the query to delete. query_title: Title of the query to delete. repository: Repository name (used with query_title).

Returns:

Deletion confirmation.

delete_session(session_id: str) None

Delete a saved session.

Args:

session_id: The session ID to delete.

eval(expression: str, *, timeout: float | None = None) EvalResult

Evaluate a Lisp expression on the eval server.

This is the low-level escape hatch. The expression is sent as-is and evaluated in the :agraph-claude-tools package.

Args:

expression: A Lisp expression as a string.
timeout: Request timeout in seconds (overrides default).

Returns:

EvalResult with stdout, result, error, and parsed fields.

Raises:

ConnectionError: Cannot reach the eval server.
AuthenticationError: Invalid or missing API key.
EvalError: Lisp evaluation raised an error.
TimeoutError: Request timed out.
ServerError: Unexpected HTTP error.

execute_tool(tool_name: str, **params: Any) → str

Execute a named tool directly (bypasses Claude).

This is a lower-level API for calling any of the 30+ tools directly. Parameter values are automatically converted to Lisp representations.

Args:

tool_name: Tool name (e.g., "sparql_query", "get_shacl").
**params: Tool parameters as keyword arguments.

Returns:

Tool result as a string.

Example:

result = client.execute_tool("get_shacl")
result = client.execute_tool(
    "sparql_query",
    query="SELECT ?s WHERE { ?s ?p ?o } LIMIT 5",
)
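The automatic parameter conversion can be pictured with a small sketch. The real conversion rules are internal to the client; this is only a plausible approximation covering the common Python value types:

```python
def lisp_repr(value):
    """Hypothetical sketch of Python-to-Lisp parameter conversion.

    The actual rules used by execute_tool() are internal to the
    client; this approximation covers the common cases only.
    """
    # Booleans first: in Python, True/False are also ints.
    if value is True:
        return "t"
    if value is False or value is None:
        return "nil"
    if isinstance(value, str):
        # Escape backslashes and quotes, wrap in a Lisp string literal.
        escaped = value.replace("\\", "\\\\").replace('"', '\\"')
        return '"' + escaped + '"'
    if isinstance(value, (int, float)):
        return str(value)
    raise TypeError(f"unsupported parameter type: {type(value)!r}")

print(lisp_repr("SELECT ?s WHERE { ?s ?p ?o } LIMIT 5"))
print(lisp_repr(True), lisp_repr(None), lisp_repr(5))
```

Note the boolean checks come before the numeric check, since Python booleans are a subclass of int.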
generate_sparql(question: str, *, continue_conversation: bool = True, max_iterations: int = 10, timeout: float | None = None) → str

Ask Claude to generate a SPARQL query for a natural language question.

Claude will fetch the schema, examine example queries, and iteratively test the query before returning the final version.

Args:

question: Natural language question to convert to SPARQL.
continue_conversation: If True (default), continue conversation.
max_iterations: Maximum tool-calling iterations.
timeout: Request timeout in seconds.

Returns:

The generated SPARQL query string.

Example:

sparql = client.generate_sparql("How many products were sold last month?")
print(sparql)
get_pending_visualization(ref_id: str) → PendingVisualization

Fetch a cached visualization config by its reference ID.

After Claude generates a chart and calls prepare_visualization, the config is cached server-side. Use this method to retrieve it for rendering without storing it permanently.

Also works for map configs from build_map_visualization.

Args:

ref_id: Reference ID (e.g. “viz-config-123-456” or “map-config-123-456”).

Returns:

PendingVisualization with config, type, description, and summary.

Raises:

EvalError: If the reference ID is not found (expired or invalid).

Example:

result = client.claude_query("Show product distribution as a pie chart")
# Extract ref_id from result.answer or result.stdout
viz = client.get_pending_visualization("viz-config-123-456")
print(viz.viz_type)  # "pie_chart"
print(viz.config)   # Chart.js config dict
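The "extract ref_id" step above can be done with a regular expression, assuming the reference IDs follow the "viz-config-123-456" / "map-config-123-456" naming pattern shown in this documentation:

```python
import re

def extract_ref_id(text):
    """Find the first visualization reference ID in Claude's answer.

    Assumes ref IDs match the 'viz-config-N-N' / 'map-config-N-N'
    pattern documented for get_pending_visualization().
    """
    match = re.search(r"\b(?:viz|map)-config-\d+-\d+\b", text)
    return match.group(0) if match else None

answer = "Chart prepared. Reference: viz-config-123-456. Render it client-side."
print(extract_ref_id(answer))  # viz-config-123-456
```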
get_token_cost_stats() → TokenCostStats

Get structured token usage, cost, and context statistics.

Returns all token counts, estimated costs (USD), cache savings, and context window usage in a single structured object.

Returns:

TokenCostStats with all token/cost/context data.

Example:

stats = client.get_token_cost_stats()
print(f"Total cost: ${stats.total_cost:.4f}")
print(f"Context: {stats.context_percentage:.1f}% used")
if stats.cache_savings > 0:
    print(f"Cache saved: ${stats.cache_savings:.4f}")

# Per-query cost tracking:
before = client.get_token_cost_stats()
result = client.claude_query("Find all employees...")
after = client.get_token_cost_stats()
query_cost = after.total_cost - before.total_cost
get_variable(variable_name: str) → Any

Get the value of a Lisp configuration variable.

Args:
variable_name: Variable name including asterisks (e.g., "*claude-model*").

Returns:

The parsed Python value.

get_visualizations(query_title: str, *, repository: str | None = None) → List[Visualization]

Get all stored visualizations for a query from the query library.

Args:

query_title: Title of the query to get visualizations for.
repository: Repository name filter (optional).

Returns:

List of Visualization objects.

Example:

vizs = client.get_visualizations("Product Distribution")
for v in vizs:
    print(f"{v.viz_type}: {v.description}")
health_check() → bool

Check if the eval server is running and healthy.

Returns:

True if server is healthy, False otherwise.

Raises:

ConnectionError: Cannot reach the server.

list_all_queries(repository: str | None = None) → str

List all queries in the query library.

Args:

repository: Repository name filter (optional).

Returns:

All stored queries as a string.

list_sessions(*, username: ~typing.Any = <object object>, repository: str | None = None) → List[SessionInfo]

List saved sessions, optionally filtered by username and repository.

By default, filters to the client’s username (if set). Pass username=None explicitly to see all users’ sessions.

Args:
username: Filter by username. Defaults to self.username. Pass None to list all sessions regardless of owner.
repository: Filter by repository name (optional).

Returns:

List of SessionInfo objects sorted by most-recently modified.

Example:

sessions = client.list_sessions()
for s in sessions:
    print(f"{s.title} ({s.message_count} messages)")
load_config(config_path: str) → None

Load configuration from a JSON file on the server.

Args:

config_path: Absolute path to config.json on the server.

reset_context_stats() → None

Reset all token, cost, and cache statistics counters.

restore_session(session_id: str) → SessionInfo

Restore a previously saved session into the conversation history.

Args:

session_id: The session ID returned by save_session().

Returns:

SessionInfo with metadata about the restored session.

Raises:

EvalError: If the session is not found.

save_session(title: str, *, session_id: str | None = None) → str

Save the current conversation as a named session.

Args:

title: Human-readable title for the session.
session_id: Optional ID to update an existing session. If None, a new session ID is generated.

Returns:

The session ID (can be used for restore/delete).

Example:

sid = client.save_session("Employee analysis Q3")
# Later:
client.restore_session(sid)
search_queries(search: str, repository: str | None = None) → str

Search the query library by natural language description.

Args:

search: Search terms.
repository: Repository name filter (optional).

Returns:

Matching queries as a string.

set_api_key(api_key: str) → None

Set the Anthropic API key on the server.

Args:

api_key: Anthropic API key string.

set_max_iterations(n: int) → int

Set the default maximum iterations for Claude queries.

Args:

n: Number of iterations (1-100).

Returns:

The new value.

set_meta_schema_repository(repo_string: str) → bool

Set the meta-schema repository for the semantic layer.

Args:

repo_string: “catalog:repository” or just repository name.

Returns:

True if repository exists and was set, False otherwise.

set_model(model: str) → None

Set the Claude model to use.

Args:

model: Model identifier (e.g., “claude-sonnet-4-5-20250929”).

set_prompt_caching(enabled: bool) → None

Enable or disable prompt caching.

Args:

enabled: True to enable, False to disable.

set_query_library(repo_string: str) → bool

Set the query library repository location.

Args:
repo_string: "catalog:repository", "/:repo" for root catalog, or just "repo" for current catalog.

Returns:

True if repository exists and was set, False otherwise.

set_repository(repo_string: str) → str

Switch the active AllegroGraph repository.

Args:
repo_string: Repository in "catalog:repository" format, or just "repository" for root catalog.

Returns:

Confirmation message.
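The accepted repo_string formats can be illustrated with a small parsing sketch. This is a hypothetical helper, not part of the client API; it only shows how the documented "catalog:repository" vs. bare "repository" forms decompose:

```python
def split_repo_string(repo_string):
    """Split 'catalog:repository' into (catalog, repository).

    Hypothetical illustration of the documented formats: a bare
    'repository' has no catalog part and falls in the root catalog,
    represented here as an empty catalog name.
    """
    if ":" in repo_string:
        catalog, repository = repo_string.split(":", 1)
        return catalog, repository
    return "", repo_string

print(split_repo_string("hr:employees"))  # ('hr', 'employees')
print(split_repo_string("employees"))     # ('', 'employees')
```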

set_variable(variable_name: str, value: Any) → None

Set a Lisp configuration variable.

Args:
variable_name: Variable name including asterisks (e.g., "*max-tokens*").
value: Python value (str, int, float, bool, or None).

show_available_tools() → str

List all available tools with descriptions.

Returns:

Formatted tool list.

show_conversation_history() → str

Display the current conversation history.

Returns:

Formatted conversation history.

sparql_query(query: str) → str

Execute a SPARQL SELECT/CONSTRUCT/ASK/DESCRIBE query directly.

Bypasses Claude – runs the query directly against AllegroGraph.

Args:

query: SPARQL query string.

Returns:

Query results as a string.

sparql_update(update: str) → str

Execute a SPARQL UPDATE/INSERT/DELETE directly.

Bypasses Claude – runs the update directly against AllegroGraph.

Args:

update: SPARQL update string.

Returns:

Result confirmation string.

start_conversation() → None

Start a new conversation, clearing previous history.

stop() → None

Shut down the GraphTalker eval server process.

Calls (stop-eval-server) on the server, which unpublishes all HTTP endpoints and terminates the process. After this call the client is no longer usable.

Raises:

ConnectionError: Cannot reach the eval server.
AuthenticationError: Invalid or missing API key.

store_query(title: str, description: str, sparql_query: str, repository: str | None = None) → str

Store a SPARQL query in the query library.

Args:

title: Query title for identification.
description: Human-readable description for search/discovery.
sparql_query: The SPARQL query string.
repository: Repository name override (defaults to current).

Returns:

Result confirmation.

store_visualization(query_title: str, viz_config: str, repository: str, *, viz_type: str | None = None, description: str | None = None, summary: str | None = None) → str

Store a visualization for a query in the query library.

The viz_config can be a reference ID from prepare_visualization or build_map_visualization, in which case viz_type, description, and summary are auto-filled from the cached metadata.

Args:

query_title: Title of the query this visualization belongs to.
viz_config: Reference ID string (preferred) or raw JSON config.
repository: Repository name.
viz_type: Visualization type (auto-filled when using a reference ID).
description: Description (auto-filled when using a reference ID).
summary: Markdown summary (auto-filled when using a reference ID).

Returns:

Confirmation message with visualization ID.

Example:

# Using reference ID (preferred):
client.store_visualization("Product Distribution",
                           "viz-config-123-456", "my-repo")

# Using raw config:
client.store_visualization("Product Distribution",
                           '{"type":"pie",...}', "my-repo",
                           viz_type="pie_chart",
                           description="Product breakdown")
test_connection() → bool

Test the AllegroGraph connection.

Returns:

True if connected successfully.

property username: str | None

Get the current username tag used for session management.

exception franz.graphtalker.GraphTalkerError

Bases: Exception

Base exception for all GraphTalker errors.

class franz.graphtalker.PendingVisualization(ref_id: str, viz_type: str | None = None, config: dict | None = None, description: str | None = None, summary: str | None = None)

Bases: object

A visualization config cached server-side, not yet stored permanently.

Created by Claude calling prepare_visualization (charts/network graphs) or build_map_visualization (maps). Retrievable by reference ID.

Attributes:

ref_id: Reference ID (e.g. "viz-config-123-456" or "map-config-123-456").
viz_type: Visualization type (e.g. "pie_chart", "map", "network_graph").
config: The visualization config as a parsed dict (Chart.js, GeoJSON, etc.).
description: Description of what the visualization shows.
summary: Optional markdown narrative summary.

__init__(ref_id: str, viz_type: str | None = None, config: dict | None = None, description: str | None = None, summary: str | None = None) → None
config: dict | None = None
description: str | None = None
ref_id: str
summary: str | None = None
viz_type: str | None = None
exception franz.graphtalker.QueryAbortedError(message: str, stdout: str = '')

Bases: EvalError

Query was aborted by the user via abort_query().

Subclass of EvalError so existing except EvalError handlers still catch it, but callers can distinguish aborts if needed.
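This subclassing behavior can be shown with a short sketch. The stand-in classes below only mirror the documented hierarchy so the pattern runs without a server; in real code you would catch the exception classes from franz.graphtalker:

```python
# Stand-in classes mirroring the documented hierarchy
# (QueryAbortedError is a subclass of EvalError).
class EvalError(Exception): ...
class QueryAbortedError(EvalError): ...

def handle(exc):
    try:
        raise exc
    except QueryAbortedError:
        return "aborted"      # user called abort_query()
    except EvalError:
        return "eval failed"  # any other Lisp evaluation error

print(handle(QueryAbortedError("user abort")))  # aborted
print(handle(EvalError("boom")))                # eval failed
```

Because the except clauses are checked in order, a plain `except EvalError` handler with no preceding clause would also catch aborts, which is the documented backward-compatible behavior.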

class franz.graphtalker.QueryResult(answer: str, stdout: str, raw_result: str)

Bases: object

Result from a Claude query (claude_query, claude_ask, generate_sparql).

Attributes:

answer: Claude's final answer text.
stdout: Full conversation output (iterations, tool calls, etc.).
raw_result: The raw Lisp result string.

__init__(answer: str, stdout: str, raw_result: str) → None
answer: str
raw_result: str
stdout: str
exception franz.graphtalker.ServerError(status_code: int, body: str)

Bases: GraphTalkerError

Eval server returned an unexpected HTTP error.

Attributes:

status_code: The HTTP status code.
body: The response body.

__init__(status_code: int, body: str)
class franz.graphtalker.SessionInfo(session_id: str, title: str, username: str | None = None, repository: str = '', message_count: int = 0, created: str | None = None, modified: str | None = None)

Bases: object

Information about a saved conversation session.

Attributes:

session_id: Unique session identifier (fragment after # in URI).
title: Human-readable session title.
username: Owner username tag (None if not set).
repository: Repository the session was saved against.
message_count: Number of messages in the saved conversation.
created: ISO 8601 creation timestamp.
modified: ISO 8601 last-modified timestamp.

__init__(session_id: str, title: str, username: str | None = None, repository: str = '', message_count: int = 0, created: str | None = None, modified: str | None = None) → None
created: str | None = None
message_count: int = 0
modified: str | None = None
repository: str = ''
session_id: str
title: str
username: str | None = None
exception franz.graphtalker.TimeoutError

Bases: GraphTalkerError

Request to eval server timed out.

class franz.graphtalker.TokenCostStats(input_tokens: int = 0, output_tokens: int = 0, cache_write_tokens: int = 0, cache_read_tokens: int = 0, total_tokens: int = 0, input_cost: float = 0.0, output_cost: float = 0.0, cache_write_cost: float = 0.0, cache_read_cost: float = 0.0, total_cost: float = 0.0, cost_without_cache: float = 0.0, cache_savings: float = 0.0, cache_savings_percent: float = 0.0, context_tokens: int = 0, context_percentage: float = 0.0, context_limit: int = 200000, message_count: int = 0)

Bases: object

Token usage, cost, and context statistics from get_token_cost_stats().

Provides a single structured view of all token counts, estimated costs, cache savings, and context window usage for the current session.

Attributes:

input_tokens: Total input tokens (cumulative).
output_tokens: Total output tokens (cumulative).
cache_write_tokens: Tokens used to create cache entries.
cache_read_tokens: Tokens read from cache.
total_tokens: Sum of all token categories.
input_cost: Estimated cost for input tokens (USD).
output_cost: Estimated cost for output tokens (USD).
cache_write_cost: Estimated cost for cache writes (USD).
cache_read_cost: Estimated cost for cache reads (USD).
total_cost: Total estimated cost (USD).
cost_without_cache: What it would have cost without caching (USD).
cache_savings: Money saved by caching (USD).
cache_savings_percent: Cache savings as a percentage.
context_tokens: Current conversation size (tokens).
context_percentage: Current context window usage as a percentage.
context_limit: Anthropic's context limit (200000).
message_count: Number of messages in the conversation.

__init__(input_tokens: int = 0, output_tokens: int = 0, cache_write_tokens: int = 0, cache_read_tokens: int = 0, total_tokens: int = 0, input_cost: float = 0.0, output_cost: float = 0.0, cache_write_cost: float = 0.0, cache_read_cost: float = 0.0, total_cost: float = 0.0, cost_without_cache: float = 0.0, cache_savings: float = 0.0, cache_savings_percent: float = 0.0, context_tokens: int = 0, context_percentage: float = 0.0, context_limit: int = 200000, message_count: int = 0) → None
cache_read_cost: float = 0.0
cache_read_tokens: int = 0
cache_savings: float = 0.0
cache_savings_percent: float = 0.0
cache_write_cost: float = 0.0
cache_write_tokens: int = 0
context_limit: int = 200000
context_percentage: float = 0.0
context_tokens: int = 0
cost_without_cache: float = 0.0
input_cost: float = 0.0
input_tokens: int = 0
message_count: int = 0
output_cost: float = 0.0
output_tokens: int = 0
total_cost: float = 0.0
total_tokens: int = 0
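The savings fields appear to relate arithmetically to the cost fields. A sketch with a minimal local stand-in dataclass and illustrative (not real-pricing) numbers, assuming cache_savings is the difference between cost_without_cache and total_cost:

```python
from dataclasses import dataclass

@dataclass
class Stats:
    # Minimal stand-in mirroring two documented TokenCostStats
    # cost fields (USD); numbers are illustrative only.
    total_cost: float = 0.75
    cost_without_cache: float = 1.00

s = Stats()
# Assumed relationship implied by the field descriptions above.
cache_savings = s.cost_without_cache - s.total_cost
cache_savings_percent = 100.0 * cache_savings / s.cost_without_cache
print(f"saved ${cache_savings:.2f} ({cache_savings_percent:.0f}%)")  # saved $0.25 (25%)
```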
class franz.graphtalker.Visualization(viz_id: str, viz_type: str, config: str, description: str = '', summary: str | None = None, created: str | None = None)

Bases: object

A visualization stored permanently in the query library.

Attributes:

viz_id: Visualization URI in the query library.
viz_type: Visualization type (e.g. "bar_chart", "pie_chart").
config: The visualization config as a string (JSON).
description: Description of what the visualization shows.
summary: Optional markdown narrative summary.
created: ISO 8601 creation timestamp.

__init__(viz_id: str, viz_type: str, config: str, description: str = '', summary: str | None = None, created: str | None = None) → None
config: str
created: str | None = None
description: str = ''
summary: str | None = None
viz_id: str
viz_type: str