A Python package for tracking and analyzing LLM usage across different models and applications. It is primarily designed as a library to be integrated into LLM-based agentic workflow tooling, providing robust usage-tracking capabilities. While its main use is as a library, it also provides a powerful CLI for scripting and batch workloads.
Keywords: LLM, accounting, usage tracking, cost management, token counting, agentic workflows, AI, Python
- Track usage of different LLM models
- Track usage by project
- Record token counts (prompt, completion, total)
- Track costs and execution times
- Support for local token counting
- Pluggable backend system (SQLite, CSV, and PostgreSQL backends supported)
- CLI interface for viewing and tracking usage statistics
- Support for tracking caller application and username
- Automatic database schema migration (for supported backends)
- Strict model name validation
- Automatic timestamp handling
- Comprehensive audit logging for all LLM interactions
- Retrieve remaining quota information after logging usage
- Optional enforcement of allowed project names with `projects` management commands
```bash
pip install llm-accounting
```
For specific database backends, install the corresponding optional dependencies:
```bash
# For SQLite (default)
pip install llm-accounting[sqlite]

# For PostgreSQL
pip install llm-accounting[postgresql]
```
The `LLMAccounting` class automatically manages the database connection for its chosen backend. You can simply instantiate it and call its methods; the backend will ensure the connection is active when needed.
```python
from datetime import datetime, timedelta

from llm_accounting import LLMAccounting

# Default backend (SQLite) is used if no backend is provided.
# You can also set default project, app, and user names here.
accounting = LLMAccounting(
    project_name="my_default_project",  # Optional: default project for all entries
    app_name="my_default_app",          # Optional: default caller name for all entries
    user_name="my_default_user"         # Optional: default username for all entries
)

# Track usage (model name is required, timestamp is optional).
# Parameters provided here override the defaults set in the constructor.
accounting.track_usage(
    model="gpt-4",          # Required: name of the LLM model
    prompt_tokens=100,
    completion_tokens=50,
    total_tokens=150,
    cost=0.002,
    execution_time=1.5,
    caller_name="my_app",   # Optional: overrides default app_name
    username="john_doe",    # Optional: overrides default user_name
    project="my_project",   # Optional: overrides default project_name
    timestamp=None          # Optional: if None, the current time is used
)

# Track usage and get remaining limits
remaining = accounting.track_usage_with_remaining_limits(
    model="gpt-4",
    prompt_tokens=100,
    completion_tokens=50,
    total_tokens=150,
    cost=0.002,
)
for limit, left in remaining:
    print(f"{limit.scope} {limit.limit_type}: {left} remaining")

# Get statistics
end_date = datetime.now()
start_date = end_date - timedelta(days=7)  # Last 7 days
stats = accounting.get_period_stats(start_date, end_date)
model_stats = accounting.get_model_stats(start_date, end_date)
rankings = accounting.get_model_rankings(start_date, end_date)
print(f"Total cost last 7 days: {stats.sum_cost}")
print(f"Model stats: {model_stats}")
print(f"Model rankings: {rankings}")

# When you are done with the accounting instance, it's good practice to close it.
# If used as a context manager, it is closed automatically.
accounting.close()
```
Note: The `LLMAccounting` class and its methods are synchronous. If you are integrating `llm-accounting` into an asynchronous application, run its synchronous calls in a separate thread (e.g., using `asyncio.to_thread`) to avoid blocking the event loop.
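As a minimal sketch of that pattern, the snippet below offloads a blocking call to a worker thread. The `track_usage_sync` function is a hypothetical stand-in for a real `accounting.track_usage(...)` call, used here only to keep the example self-contained:

```python
import asyncio
import time

# Hypothetical stand-in for a blocking llm-accounting call such as
# accounting.track_usage(...); time.sleep simulates synchronous database I/O.
def track_usage_sync(model: str, total_tokens: int) -> str:
    time.sleep(0.01)
    return f"tracked {total_tokens} tokens for {model}"

async def handle_llm_call() -> str:
    # asyncio.to_thread runs the blocking call in a worker thread,
    # so the event loop stays free to serve other coroutines.
    return await asyncio.to_thread(track_usage_sync, "gpt-4", 150)

print(asyncio.run(handle_llm_call()))  # → tracked 150 tokens for gpt-4
```

In a real application you would pass the bound method and its arguments to `asyncio.to_thread` in the same way.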
The following options can be used with any `llm-accounting` command:

- `--db-file <path>`: Specifies the SQLite database file path. Only applicable when `--db-backend` is `sqlite`.
- `--db-backend <backend>`: Selects the database backend (`sqlite` or `postgresql`). Defaults to `sqlite`.
- `--postgresql-connection-string <string>`: Connection string for the PostgreSQL database. Required when `--db-backend` is `postgresql`. Can also be provided via the `POSTGRESQL_CONNECTION_STRING` environment variable.
- `--audit-db-backend <backend>`: Backend for audit logs. Defaults to the value of `--db-backend`.
- `--audit-db-file <path>`: SQLite database file path for audit logs (when the audit backend is `sqlite`).
- `--audit-postgresql-connection-string <string>`: Connection string for the PostgreSQL audit log database. Can also be provided via the `AUDIT_POSTGRESQL_CONNECTION_STRING` environment variable.
- `--project-name <name>`: Default project name to associate with usage entries. Can be overridden by the command-specific `--project`.
- `--app-name <name>`: Default application name to associate with usage entries. Can be overridden by the command-specific `--caller-name`.
- `--user-name <name>`: Default user name to associate with usage entries. Can be overridden by the command-specific `--username`. Defaults to the current system user.
- `--enforce-project-names`: When set, project names supplied to commands must exist in the project dictionary.
```bash
# Track a new usage entry (model name is required, timestamp is optional)
llm-accounting track \
  --model gpt-4 \
  --prompt-tokens 100 \
  --completion-tokens 50 \
  --total-tokens 150 \
  --cost 0.002 \
  --execution-time 1.5 \
  --caller-name my_app \
  --username john_doe \
  --project my_project \
  --timestamp "2024-01-01T12:00:00" \
  --cached-tokens 20 \
  --reasoning-tokens 10
```
Logs a generic event to the audit log. This is useful for recording custom events, feedback, or other notable occurrences related to LLM interactions that might not fit the standard usage tracking.
Arguments:

- `--app-name` (string, required): Name of the application.
- `--user-name` (string, required): Name of the user.
- `--model` (string, required): Name of the LLM model associated with the event.
- `--log-type` (string, required): Type of the log entry (e.g., `prompt`, `response`, `event`, or any custom type).
- `--prompt-text` (string, optional): Text of the prompt, if relevant.
- `--response-text` (string, optional): Text of the response, if relevant.
- `--remote-completion-id` (string, optional): ID of the remote completion, if relevant.
- `--project` (string, optional): Project name to associate with the event.
- `--timestamp` (string, optional): Timestamp of the event (`YYYY-MM-DD HH:MM:SS` or ISO format, e.g., `2023-10-27T14:30:00Z`). Defaults to the current time.
Example:
```bash
llm-accounting log-event \
  --app-name my-app \
  --user-name testuser \
  --model gpt-4 \
  --log-type event \
  --prompt-text "User reported positive feedback." \
  --project "Alpha" \
  --timestamp "2024-01-15T10:30:00"
```
```bash
# Show today's stats
llm-accounting stats --daily

# Show stats for a custom period
llm-accounting stats --start 2024-01-01 --end 2024-01-31

# Show most recent entries
llm-accounting tail

# Show last 5 entries
llm-accounting tail -n 5

# Delete all entries
llm-accounting purge

# Execute custom SQL queries (if the backend supports it and it's enabled)
llm-accounting select --query "SELECT model, COUNT(*) as count FROM accounting_entries GROUP BY model"
```
The `llm-accounting limits` command allows you to manage usage limits for your LLM interactions. It supports advanced multi-dimensional limiting and rolling time windows. You can set, list, and delete limits based on various scopes (global, model, user, caller, project) and types (requests, input tokens, output tokens, total tokens, cost) over specified time intervals.

Wildcards can be specified using `*` for any of the model, username, caller name, or project name fields. A `max-value` of `0` denies all matching usage, while `-1` allows unlimited usage.

For example, you can deny all models for a user with `--model "*" --max-value 0` and then add specific limits with `--max-value -1` to allow certain models or projects.
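To make the wildcard and `0`/`-1` semantics concrete, here is an illustrative evaluation sketch. The dictionaries and the "most specific limit wins" rule below are assumptions for illustration, not the library's internal representation or resolution order:

```python
# Illustrative only: each limit matches a usage record when every
# non-wildcard field equals the record's corresponding field.
def limit_matches(limit: dict, usage: dict) -> bool:
    return all(
        limit[f] in ("*", usage[f])
        for f in ("model", "username", "project")
    )

def most_specific_max(limits, usage):
    # Prefer limits with fewer wildcards (more specific) over broader ones.
    matching = [l for l in limits if limit_matches(l, usage)]
    if not matching:
        return None
    return min(matching, key=lambda l: sum(v == "*" for v in l.values()))["max_value"]

limits = [
    {"model": "*", "username": "john", "project": "*", "max_value": 0},      # deny everything for john
    {"model": "gpt-4", "username": "john", "project": "*", "max_value": -1}, # but allow gpt-4 unlimited
]

print(most_specific_max(limits, {"model": "gpt-4", "username": "john", "project": "p"}))          # → -1 (unlimited)
print(most_specific_max(limits, {"model": "gpt-3.5-turbo", "username": "john", "project": "p"}))  # → 0 (denied)
```

The deny-all wildcard limit catches everything, while the more specific `gpt-4` limit carves out an unlimited exception, mirroring the CLI example above.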
Set a new usage limit. For example, to set a global limit of 1000 requests per day:

```bash
llm-accounting limits set \
  --scope GLOBAL \
  --limit-type requests \
  --max-value 1000 \
  --interval-unit day \
  --interval-value 1
```

To set a cost limit of $5.00 per hour for a specific user:

```bash
llm-accounting limits set \
  --scope USER \
  --username john_doe \
  --limit-type cost \
  --max-value 5.00 \
  --interval-unit hour \
  --interval-value 1
```

To set an input token limit of 50000 tokens per week for a specific model:

```bash
llm-accounting limits set \
  --scope MODEL \
  --model gpt-4 \
  --limit-type input_tokens \
  --max-value 50000 \
  --interval-unit week \
  --interval-value 1
```

To set a cost limit of $10.00 per day for a specific project:

```bash
llm-accounting limits set \
  --scope PROJECT \
  --project my_project \
  --limit-type cost \
  --max-value 10.00 \
  --interval-unit day \
  --interval-value 1
```

List all configured usage limits:

```bash
llm-accounting limits list
```

Delete a usage limit by its ID (you can find the ID using `llm-accounting limits list`):

```bash
llm-accounting limits delete --id 1
```
You can specify the database backend directly via the CLI using the `--db-backend` option. This allows you to switch between `sqlite` (default) and `postgresql` without modifying code.

Audit logs can optionally use a different backend by providing the `--audit-db-backend` and related options.
The `projects` command manages the list of allowed project names when `--enforce-project-names` is used.

```bash
llm-accounting projects add MyProj
llm-accounting projects list
llm-accounting projects update MyProj NewName
llm-accounting projects delete NewName
```
```bash
# Use SQLite backend (default behavior, --db-backend can be omitted)
llm-accounting --db-backend sqlite --db-file my_sqlite_db.sqlite stats --daily

# Use PostgreSQL backend
# Requires the POSTGRESQL_CONNECTION_STRING environment variable to be set, or provide it directly
llm-accounting --db-backend postgresql --postgresql-connection-string "postgresql://user:pass@localhost:5432/mydatabase" stats --daily

# Example: Track usage with PostgreSQL backend
llm-accounting --db-backend postgresql \
  --postgresql-connection-string "postgresql://user:pass@localhost:5432/mydatabase" \
  track \
  --model gpt-4 \
  --prompt-tokens 10 \
  --cost 0.0001
```
The CLI can be easily integrated into shell scripts. Here's an example:
```bash
#!/bin/bash

# Track usage after an LLM API call
llm-accounting track \
  --model "gpt-4" \
  --prompt-tokens "$PROMPT_TOKENS" \
  --completion-tokens "$COMPLETION_TOKENS" \
  --total-tokens "$TOTAL_TOKENS" \
  --cost "$COST" \
  --execution-time "$EXECUTION_TIME" \
  --caller-name "my_script" \
  --username "$USER"

# Check daily usage
llm-accounting stats --daily
```
The database schema generally includes the following tables and key fields (specifics may vary slightly by backend, but `PostgreSQLBackend` adheres to this structure):

`accounting_entries` table:

- `id`: SERIAL PRIMARY KEY - Unique identifier for the entry.
- `model_name`: VARCHAR(255) NOT NULL - Name of the LLM model.
- `prompt_tokens`: INTEGER - Number of tokens in the prompt.
- `completion_tokens`: INTEGER - Number of tokens in the completion.
- `total_tokens`: INTEGER - Total tokens (prompt + completion).
- `local_prompt_tokens`: INTEGER - Locally counted prompt tokens.
- `local_completion_tokens`: INTEGER - Locally counted completion tokens.
- `local_total_tokens`: INTEGER - Total locally counted tokens.
- `cost`: DOUBLE PRECISION NOT NULL - Cost of the API call.
- `execution_time`: DOUBLE PRECISION - Execution time in seconds.
- `timestamp`: TIMESTAMP WITHOUT TIME ZONE DEFAULT CURRENT_TIMESTAMP - Timestamp of the usage.
- `caller_name`: VARCHAR(255) - Optional identifier for the calling application/script.
- `username`: VARCHAR(255) - Optional identifier for the user.
- `project_name`: VARCHAR(255) - Optional identifier for the project.
- `cached_tokens`: INTEGER - Number of tokens retrieved from cache.
- `reasoning_tokens`: INTEGER - Number of tokens used for model reasoning/tool use.

`usage_limits` table (for defining quotas/limits):

- `id`: SERIAL PRIMARY KEY
- `scope`: VARCHAR(50) NOT NULL (e.g., 'USER', 'GLOBAL')
- `limit_type`: VARCHAR(50) NOT NULL (e.g., 'COST', 'REQUESTS')
- `max_value`: DOUBLE PRECISION NOT NULL
- `interval_unit`: VARCHAR(50) NOT NULL (e.g., 'HOURLY', 'DAILY')
- `interval_value`: INTEGER NOT NULL
- `model_name`: VARCHAR(255) (optional, for model-specific limits)
- `username`: VARCHAR(255) (optional, for user-specific limits)
- `caller_name`: VARCHAR(255) (optional, for caller-specific limits)
- `created_at`: TIMESTAMP WITHOUT TIME ZONE DEFAULT CURRENT_TIMESTAMP
- `updated_at`: TIMESTAMP WITHOUT TIME ZONE DEFAULT CURRENT_TIMESTAMP

`audit_log_entries` table (for detailed event logging):

- `id`: SERIAL PRIMARY KEY
- `timestamp`: TIMESTAMPTZ NOT NULL
- `app_name`: VARCHAR(255) NOT NULL
- `user_name`: VARCHAR(255) NOT NULL
- `model`: VARCHAR(255) NOT NULL
- `prompt_text`: TEXT
- `response_text`: TEXT
- `remote_completion_id`: VARCHAR(255)
- `project`: VARCHAR(255)
- `log_type`: TEXT NOT NULL (e.g., 'prompt', 'response', 'event')

Note: The `id` fields are managed internally by the database.
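To show how period statistics can be derived from a schema like this, here is a self-contained SQLite session against a simplified `accounting_entries` table (column types adapted to SQLite; the query mirrors, but is not, the library's own implementation):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Simplified accounting_entries table, adapted from the schema above.
conn.execute("""
    CREATE TABLE accounting_entries (
        id INTEGER PRIMARY KEY,
        model_name TEXT NOT NULL,
        prompt_tokens INTEGER,
        completion_tokens INTEGER,
        cost REAL NOT NULL,
        timestamp TEXT DEFAULT CURRENT_TIMESTAMP
    )
""")
conn.executemany(
    "INSERT INTO accounting_entries "
    "(model_name, prompt_tokens, completion_tokens, cost, timestamp) VALUES (?, ?, ?, ?, ?)",
    [
        ("gpt-4", 100, 50, 0.002, "2024-01-02T10:00:00"),
        ("gpt-4", 200, 80, 0.004, "2024-01-03T11:00:00"),
        ("gpt-3.5-turbo", 300, 120, 0.001, "2024-01-04T12:00:00"),
    ],
)

# Aggregate request count and cost per model within a period,
# in the spirit of get_model_stats / get_model_rankings.
cur = conn.execute("""
    SELECT model_name, COUNT(*), SUM(cost)
    FROM accounting_entries
    WHERE timestamp BETWEEN '2024-01-01T00:00:00' AND '2024-01-31T23:59:59'
    GROUP BY model_name
    ORDER BY SUM(cost) DESC
""")
for model, n, total_cost in cur:
    print(model, n, round(total_cost, 4))
```

Because ISO-8601 timestamps sort lexicographically, the `BETWEEN` filter works even with text-typed timestamp columns.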
This project uses Alembic to manage database schema migrations, working in conjunction with our SQLAlchemy models (defined in `src/llm_accounting/models/`). While SQLAlchemy defines the desired schema, Alembic is used to generate and apply the necessary database changes.
When you make changes to the SQLAlchemy models that require a schema alteration (e.g., adding a table, adding a column, changing a column type), you need to generate a new migration script using Alembic.
1. Ensure your development database is accessible and reflects the schema before your new model changes. Alembic compares your models against the live database (specifically, the state recorded in its `alembic_version` table) to generate the migration. It is usually best to have your database upgraded to the latest revision before generating a new one.

2. Make your changes to the SQLAlchemy models in the `src/llm_accounting/models/` directory.

3. Run the following command from the project root:

   ```bash
   LLM_ACCOUNTING_DB_URL="your_database_connection_string" alembic revision -m "descriptive_migration_name" --autogenerate
   ```

   - Replace `"your_database_connection_string"` with the actual connection string for your development database.
     - For SQLite (default development): `sqlite:///./data/accounting.sqlite`
     - For PostgreSQL: `postgresql://user:pass@host:port/dbname` (use your actual credentials and host)
   - Replace `"descriptive_migration_name"` with a short, meaningful description of the changes (e.g., `add_user_email_column`, `create_indexes_for_timestamps`). This becomes part of the migration filename.

4. Review the generated migration script in the `alembic/versions/` directory. Ensure it accurately reflects the intended changes. You may need to adjust it, especially for complex changes not perfectly detected by autogenerate (e.g., specific index types, constraints, or data migrations).

5. Commit the new migration script along with your model changes.
Database migrations are applied automatically when the `LLMAccounting` service starts (specifically, when an `LLMAccounting` instance is created). The application checks for any pending migrations and attempts to upgrade the database to the latest version using Alembic.
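Conceptually, "checking for pending migrations" boils down to comparing the revision Alembic stores in the database's `alembic_version` table against the head revision of the migration scripts. The sketch below illustrates that comparison with SQLite; the `LATEST_REVISION` value is hypothetical, and this is not the library's actual startup code:

```python
import sqlite3

LATEST_REVISION = "abc123"  # hypothetical head revision of the migration scripts

conn = sqlite3.connect(":memory:")
# Alembic's bookkeeping table holds a single row with the current revision.
conn.execute("CREATE TABLE alembic_version (version_num VARCHAR(32) NOT NULL)")
conn.execute("INSERT INTO alembic_version VALUES ('000aaa')")

def needs_upgrade(conn, latest: str) -> bool:
    # An upgrade is pending when no revision is recorded, or the recorded
    # revision differs from the head revision.
    row = conn.execute("SELECT version_num FROM alembic_version").fetchone()
    return row is None or row[0] != latest

print(needs_upgrade(conn, LATEST_REVISION))  # → True (stored revision is behind head)
```

When an upgrade is needed, Alembic walks the chain of migration scripts from the stored revision up to head and records the new revision afterwards.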
The default backend is SQLite, which stores data in a local file. Below is a comprehensive example demonstrating how to configure a custom SQLite database file, track usage, set and check usage limits, and utilize the audit logger.
```python
import os
import time
from datetime import datetime, timedelta

from llm_accounting import LLMAccounting
from llm_accounting.audit_log import AuditLogger
from llm_accounting.backends.sqlite import SQLiteBackend
from llm_accounting.models.limits import LimitScope, LimitType, TimeInterval

# Define custom database filenames
custom_accounting_db_filename = "my_custom_accounting.sqlite"
custom_audit_db_filename = "my_custom_audit.sqlite"

print(f"Initializing LLMAccounting with custom DB: {custom_accounting_db_filename}")

# 1. Initialize SQLiteBackend with the custom filename
sqlite_backend = SQLiteBackend(db_path=custom_accounting_db_filename)

# 2. Pass the custom backend to LLMAccounting.
# The backend will automatically manage its connection.
accounting = LLMAccounting(backend=sqlite_backend)
print(f"LLMAccounting initialized. Actual DB path: {accounting.get_db_path()}")

# Example usage: track some usage
accounting.track_usage(
    model="gpt-4",
    prompt_tokens=100,
    completion_tokens=50,
    cost=0.01,
    username="example_user",
    caller_name="example_app"
)
print("Usage tracked successfully.")

# Verify stats (optional)
end_time = datetime.now()
start_time = end_time - timedelta(days=1)
stats = accounting.get_period_stats(start_time, end_time)
print(f"Stats for last 24 hours: {stats.sum_cost:.4f} cost, {stats.sum_total_tokens} tokens")

print("\n--- Testing Usage Limits ---")

# Set a global limit: 10 requests per minute
print("Setting a global limit: 10 requests per minute...")
accounting.set_usage_limit(
    scope=LimitScope.GLOBAL,
    limit_type=LimitType.REQUESTS,
    max_value=10,
    interval_unit=TimeInterval.MINUTE,
    interval_value=1
)
print("Global limit set.")

# Simulate requests and check quota
for i in range(1, 15):  # Try 14 requests to exceed the limit
    model = "gpt-3.5-turbo"
    username = "test_user"
    caller_name = "test_app"
    input_tokens = 10

    allowed, reason = accounting.check_quota(
        model=model,
        username=username,
        caller_name=caller_name,
        input_tokens=input_tokens
    )
    if allowed:
        print(f"Request {i}: ALLOWED. Tracking usage...")
        accounting.track_usage(
            model=model,
            prompt_tokens=input_tokens,
            cost=0.0001,
            username=username,
            caller_name=caller_name
        )
    else:
        print(f"Request {i}: DENIED. Reason: {reason}")
    # Small delay to simulate real-world requests, but not enough to reset the minute limit
    time.sleep(0.1)

# It's good practice to explicitly close the accounting instance when done,
# though the backend methods will auto-connect if needed for subsequent calls.
accounting.close()

print(f"\nInitializing AuditLogger with custom DB: {custom_audit_db_filename}")

# Initialize AuditLogger with the custom filename
with AuditLogger(db_path=custom_audit_db_filename) as audit_logger:
    print(f"AuditLogger initialized. Actual DB path: {audit_logger.get_db_path()}")

    # Example usage: log a prompt
    audit_logger.log_prompt(
        app_name="my_app",
        user_name="test_user",
        model="gpt-3.5-turbo",
        prompt_text="Hello, how are you?"
    )
    print("Prompt logged successfully.")

    # Example usage: log a response
    audit_logger.log_response(
        app_name="my_app",
        user_name="test_user",
        model="gpt-3.5-turbo",
        response_text="I am doing well, thank you!",
        remote_completion_id="comp_123"
    )
    print("Response logged successfully.")

    # Example usage: get audit log entries
    print("\nRetrieving audit log entries...")
    entries = audit_logger.get_entries(limit=5)
    for entry in entries:
        print(f"  [{entry.timestamp}] App: {entry.app_name}, User: {entry.user_name}, "
              f"Model: {entry.model}, Type: {entry.log_type}")
        if entry.prompt_text:
            print(f"    Prompt: {entry.prompt_text[:50]}...")
        if entry.response_text:
            print(f"    Response: {entry.response_text[:50]}...")
    print(f"Retrieved {len(entries)} audit log entries.")

# Clean up the created database files (for example purposes)
print("\nCleaning up created database files...")
if os.path.exists(custom_accounting_db_filename):
    os.remove(custom_accounting_db_filename)
    print(f"Removed {custom_accounting_db_filename}")
if os.path.exists(custom_audit_db_filename):
    os.remove(custom_audit_db_filename)
    print(f"Removed {custom_audit_db_filename}")

print("\nExample complete.")
```
The `PostgreSQLBackend` provides a reference implementation for using a PostgreSQL database with `llm-accounting`. It can be used with any standard PostgreSQL instance, whether locally deployed or hosted in the cloud (e.g., Neon, AWS RDS).
1. Set Up Your PostgreSQL Database (User's Responsibility):

To use `PostgreSQLBackend`, you'll need access to a PostgreSQL database instance. This can be:

- A local PostgreSQL server: Install PostgreSQL on your machine and create a database.
- A hosted PostgreSQL service: Use a cloud provider like Neon, AWS RDS, Google Cloud SQL, Azure Database for PostgreSQL, etc.

Once you have a database, obtain its connection string (URI format). It will look something like this:

```
postgresql://<user>:<password>@<host>:<port>/<dbname>
```

For cloud services like Neon, `sslmode=require` might be necessary:

```
postgresql://<user>:<password>@<host>.neon.tech:<port>/<dbname>?sslmode=require
```
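If you want to sanity-check a connection string before handing it to the backend, Python's standard `urllib.parse` can split it into its components. The URL below uses made-up example values:

```python
from urllib.parse import parse_qs, urlsplit

# Example values only; substitute your real credentials and host.
url = "postgresql://myuser:secret@ep-example.neon.tech:5432/mydb?sslmode=require"
parts = urlsplit(url)

print(parts.scheme)                # postgresql
print(parts.hostname, parts.port)  # ep-example.neon.tech 5432
print(parts.path.lstrip("/"))      # mydb (database name)
print(parse_qs(parts.query))       # {'sslmode': ['require']}
```

Checking that the scheme, host, port, database name, and `sslmode` parameter parse as expected catches most copy-paste mistakes before the first connection attempt.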
2. Install Dependencies:

The `PostgreSQLBackend` requires the `psycopg2-binary` package to communicate with PostgreSQL databases. You can install it as an extra dependency:

```bash
pip install llm-accounting[postgresql]
```
3. Configuration:

The `PostgreSQLBackend` primarily expects the database connection string to be available via the `POSTGRESQL_CONNECTION_STRING` environment variable.

```bash
export POSTGRESQL_CONNECTION_STRING="postgresql://your_user:your_password@your_host:5432/your_dbname"
```

Replace the placeholder values with your actual PostgreSQL connection string.

Alternatively, if you are instantiating `PostgreSQLBackend` manually in your code, you can pass the connection string directly to its constructor (though using the environment variable is often preferred for flexibility).
4. Usage Example:

To use the `PostgreSQLBackend`, instantiate it and pass it to the `LLMAccounting` class:

```python
from datetime import datetime, timedelta

from llm_accounting import LLMAccounting
from llm_accounting.backends.postgresql import PostgreSQLBackend

# Option 1: Connection string from the POSTGRESQL_CONNECTION_STRING environment variable.
# Ensure it is set in your environment before running the script, e.g.:
#   export POSTGRESQL_CONNECTION_STRING="your_postgresql_uri_here"
postgresql_backend_env = PostgreSQLBackend()  # Reads from the environment variable
accounting_postgresql_env = LLMAccounting(backend=postgresql_backend_env)
# The backend will automatically manage its connection.

# Example: Track usage
accounting_postgresql_env.track_usage(
    model="gpt-3.5-turbo",
    prompt_tokens=50,
    completion_tokens=100,
    cost=0.00015
)
print("Usage tracked with PostgreSQL backend (from env var).")

# Example: Get stats for a period
end_date = datetime.now()
start_date = end_date - timedelta(days=7)  # Last 7 days
stats = accounting_postgresql_env.get_period_stats(start_date, end_date)
print(f"PostgreSQL backend stats: {stats.sum_cost}")

# Option 2: Pass the connection string directly.
# Replace with your actual connection string if testing this way.
postgresql_connection_str = "postgresql://user:pass@localhost:5432/mydatabase"
postgresql_backend_direct = PostgreSQLBackend(postgresql_connection_string=postgresql_connection_str)
accounting_postgresql_direct = LLMAccounting(backend=postgresql_backend_direct)

accounting_postgresql_direct.track_usage(
    model="gpt-4",
    prompt_tokens=200,
    completion_tokens=400,
    cost=0.006
)
print("Usage tracked with PostgreSQL backend (direct connection string).")

# It's good practice to explicitly close the accounting instances when done.
accounting_postgresql_env.close()
accounting_postgresql_direct.close()
```
Error Handling/Notes:

- The `PostgreSQLBackend` includes error handling for common database connection and operation issues, raising `ConnectionError` or `psycopg2.Error` as appropriate.
- Ensure your PostgreSQL database instance is active and accessible from the environment where your application is running.
- Refer to the PostgreSQL documentation (or your cloud provider's documentation, such as Neon's) for details on managing your database, connection pooling, and security best practices.
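Since transient network issues can surface as `ConnectionError`, callers may want to wrap backend operations in a small retry helper. This is a generic sketch, not part of the library; the `flaky_insert` stub stands in for a real backend call:

```python
import time

def with_retries(fn, attempts=3, delay=0.01):
    """Retry a callable that may raise ConnectionError on transient failures."""
    for i in range(attempts):
        try:
            return fn()
        except ConnectionError:
            if i == attempts - 1:
                raise  # give up after the last attempt
            time.sleep(delay)

# Demo with a stub that fails twice before succeeding.
state = {"calls": 0}
def flaky_insert():
    state["calls"] += 1
    if state["calls"] < 3:
        raise ConnectionError("transient network failure")
    return "ok"

result = with_retries(flaky_insert)
print(result)  # → ok
```

In production you would typically add exponential backoff and jitter to the fixed `delay`, and retry only errors you know to be transient.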
The `llm-accounting` library is designed with a pluggable backend system, allowing you to integrate with any database or data storage solution by implementing the `BaseBackend` abstract class. This is particularly useful for integrating with existing infrastructure or custom data handling requirements.

Here's how you can implement your own custom backend, using the `MockBackend` as a simplified example:
1. Define your Backend Class: Create a new class that inherits from `llm_accounting.backends.base.BaseBackend`. You will need to implement all abstract methods defined in `BaseBackend`.

   ```python
   # my_custom_backend.py
   from datetime import datetime
   from typing import Any, Dict, List, Optional, Tuple

   from llm_accounting.backends.base import BaseBackend, UsageEntry, UsageStats
   from llm_accounting.models.limits import LimitScope, LimitType, UsageLimit


   class MyCustomBackend(BaseBackend):
       def __init__(self):
           self.usage_storage = []  # Example: a list to store UsageEntry objects
           # Add storage for limits if needed

       def initialize(self) -> None:
           print("MyCustomBackend: Initializing connection/resources...")
           # Implement your database connection or resource setup here

       def _ensure_connected(self) -> None:
           print("MyCustomBackend: Ensuring connection is active...")
           # Implement logic to ensure the connection is active,
           # e.g., call self.initialize() if not connected

       def insert_usage(self, entry: UsageEntry) -> None:
           self._ensure_connected()
           print(f"MyCustomBackend: Inserting usage for model {entry.model}")
           self.usage_storage.append(entry)
           # Implement logic to save 'entry' to your database

       def get_period_stats(self, start: datetime, end: datetime) -> UsageStats:
           self._ensure_connected()
           return UsageStats()  # Dummy implementation

       def get_model_stats(self, start: datetime, end: datetime) -> List[Tuple[str, UsageStats]]:
           self._ensure_connected()
           return []  # Dummy implementation

       def get_model_rankings(self, start: datetime, end: datetime) -> Dict[str, List[Tuple[str, Any]]]:
           self._ensure_connected()
           return {}  # Dummy implementation

       def purge(self) -> None:
           self._ensure_connected()
           self.usage_storage = []

       def tail(self, n: int = 10) -> List[UsageEntry]:
           self._ensure_connected()
           return self.usage_storage[-n:]

       def close(self) -> None:
           print("MyCustomBackend: Closing connection/resources...")

       def execute_query(self, query: str) -> List[Dict[str, Any]]:
           self._ensure_connected()
           print(f"MyCustomBackend: Executing custom query: {query}")
           return []

       def get_usage_limits(self, scope: Optional[LimitScope] = None,
                            model: Optional[str] = None,
                            username: Optional[str] = None,
                            caller_name: Optional[str] = None) -> List[UsageLimit]:
           self._ensure_connected()
           return []  # Dummy implementation

       def get_accounting_entries_for_quota(self, start_time: datetime,
                                            limit_type: LimitType,
                                            model: Optional[str] = None,
                                            username: Optional[str] = None,
                                            caller_name: Optional[str] = None) -> float:
           self._ensure_connected()
           return 0.0  # Dummy implementation

       def insert_usage_limit(self, limit: UsageLimit) -> None:
           self._ensure_connected()
           pass  # Dummy implementation

       def delete_usage_limit(self, limit_id: int) -> None:
           self._ensure_connected()
           pass  # Dummy implementation
   ```
2. Integrate with `LLMAccounting`: Once your custom backend is implemented, pass an instance of it to the `LLMAccounting` constructor:

   ```python
   from datetime import datetime

   from llm_accounting import LLMAccounting
   from my_custom_backend import MyCustomBackend  # Import your custom backend

   # Instantiate your custom backend
   custom_backend = MyCustomBackend()

   # Pass it to LLMAccounting
   accounting_custom = LLMAccounting(backend=custom_backend)

   # Now all accounting operations use your custom backend
   accounting_custom.track_usage(model="custom_model", prompt_tokens=10, cost=0.001)
   stats = accounting_custom.get_period_stats(datetime.now(), datetime.now())
   print(f"Custom backend stats: {stats.sum_cost}")

   accounting_custom.close()
   ```
By following this pattern, you can extend `llm-accounting` to work with virtually any data storage solution, providing maximum flexibility for your application's needs.
- LLM Wrapper MCP Server (`llm-wrapper-mcp-server`)
  - GitHub: https://github.com/matdev83/llm-wrapper-mcp-server
  - Description: A Model Context Protocol (MCP) server wrapper designed to facilitate seamless interaction with various Large Language Models (LLMs) through a standardized interface. It enables developers to integrate LLM capabilities into their applications by providing a robust, flexible server that handles LLM calls, tool execution, and result processing.
We will add examples of projects that use `llm-accounting` in the near future to demonstrate reference usage.
We welcome contributions to LLM Accounting! To ensure a smooth and efficient collaboration, please follow the typical fork-based contributing workflow outlined below:
1. Fork the Repository:
   - Go to the LLM Accounting GitHub repository.
   - Click the "Fork" button in the top right corner. This creates a copy of the repository under your GitHub account.

2. Clone Your Fork: Clone your fork (replace `YOUR_USERNAME` with your GitHub username) to your local machine:

   ```bash
   git clone https://github.com/YOUR_USERNAME/llm-accounting.git
   cd llm-accounting
   ```

3. Add Upstream Remote: Add the original repository as an "upstream" remote to your local clone. This allows you to fetch changes from the main project:

   ```bash
   git remote add upstream https://github.com/matdev83/llm-accounting.git
   ```

4. Create a New Branch: Before making any changes, create a new branch for your feature or bug fix. Use a descriptive name:

   ```bash
   git checkout -b feature/your-feature-name
   # or
   git checkout -b bugfix/your-bug-fix
   ```

5. Make Your Changes: Implement your feature or fix the bug. Ensure your code adheres to the project's coding standards and includes relevant tests.

6. Test Your Changes: Run the existing test suite and add new tests for your changes to ensure everything works as expected and no regressions are introduced.

7. Commit Your Changes: Commit with a clear and concise commit message:

   ```bash
   git add .
   git commit -m "feat: Add your new feature"
   # or
   git commit -m "fix: Resolve bug in X"
   ```

8. Push to Your Fork: Push your new branch to your forked repository on GitHub:

   ```bash
   git push origin feature/your-feature-name
   ```

9. Create a Pull Request (PR):
   - Go to your forked repository on GitHub.
   - You should see a "Compare & pull request" button or a prompt to create a new pull request from your recently pushed branch.
   - Click it, fill out the PR template, describe your changes, and submit the pull request to the `dev` branch of the original `llm-accounting` repository (see the branch structure below; PRs should not target `main`).

10. Address Feedback: Project maintainers will review your PR. Be prepared to address feedback or make further changes if requested.
Thank you for contributing to LLM Accounting!
The project follows a strict Git branch structure to ensure stability and facilitate collaborative development:

- `main`: Contains only production-ready, fully tested code, merged by the project maintainer. Contributors should not open PRs against, or push changes directly to, `main`.
- `dev`: Contributors, agents, and developers should work from the latest `dev` branch and base their PRs on `dev` only.
This project is licensed under the MIT License - see the LICENSE file for details.