Getting Started
Everything you need to build your first Lcore microservice, from installation to deployment.
Installation
Lcore requires Python 3.8+ and has zero required dependencies. The entire framework is a single file you can drop into any project. Every module Lcore uses ships with Python itself — no pip install needed beyond Lcore.
Via pip
```shell
pip install lcore
```
Single-file copy
Since Lcore is a single Python file, you can simply copy lcore.py into your project directory:
```shell
curl -O https://raw.githubusercontent.com/LusanSapkota/Lcore/main/lcore.py
```
Standard Library Only
Lcore is built entirely on Python's standard library. Here's what powers it under the hood — all included with every Python installation:
| Module | Used For |
|---|---|
| `hashlib`, `hmac` | Password hashing (PBKDF2-SHA256), cookie signing, CSRF tokens |
| `gzip` | Response compression middleware |
| `json` | JSON serialization/deserialization |
| `concurrent.futures` | Background task pool (`ThreadPoolExecutor`) |
| `threading` | Thread-local request/response, concurrency safety |
| `asyncio` | Async route handler support |
| `http.cookies` | Cookie parsing and formatting |
| `mimetypes` | Static file content type detection |
| `re` | Route pattern matching and URL parsing |
| `logging` | Structured request logging middleware |
| `os`, `sys`, `time`, `uuid` | Core utilities, request IDs, file serving |
This means Lcore works anywhere Python runs — containers, serverless, air-gapped networks, embedded systems — with zero network access needed for dependencies.
Optional Dependencies
| Package | Purpose |
|---|---|
| `ujson` | Faster JSON parsing (auto-detected, falls back to stdlib `json`) |
| `watchdog` | Efficient file watching for hot reload |
| `jinja2` | Jinja2 template engine support |
| `mako` | Mako template engine support |
| `gunicorn` | Production WSGI server (Linux/macOS) |
| `waitress` | Production WSGI server (Windows-compatible) |
| `gevent` | Async WSGI server with coroutines |
| `eventlet` | Alternative async WSGI server |
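The auto-detection described for `ujson` suggests the common try/except import pattern; a minimal sketch of the idea (illustrative, not Lcore's actual import logic):

```python
# Sketch of optional-dependency auto-detection, as described for ujson above.
# Illustrative only; Lcore's real detection code may differ.
try:
    import ujson as json_impl  # faster C implementation, if installed
except ImportError:
    import json as json_impl   # stdlib fallback, always available

payload = json_impl.dumps({'ok': True})
```

Either way, callers use the same `json_impl` name, so the rest of the code never needs to know which backend was found.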
Your First App
Hello World
```python
from lcore import Lcore

app = Lcore()

@app.route('/')
def index():
    return 'Hello, World!'

app.run(port=8080)
```
Save this as app.py and run it:
```shell
python app.py
```
Visit http://localhost:8080 to see your app running.
JSON API
Return a dict from any handler and Lcore automatically serializes it as JSON with the correct Content-Type:
```python
from lcore import Lcore, request, abort

app = Lcore()

users = {1: {'name': 'Alice'}, 2: {'name': 'Bob'}}

@app.route('/api/users/<id:int>')
def get_user(id):
    if id not in users:
        abort(404, 'User not found')
    return users[id]

@app.route('/api/users', method='POST')
def create_user():
    data = request.json
    new_id = max(users.keys()) + 1
    users[new_id] = data
    return {'id': new_id, 'created': True}

app.run(port=8080, debug=True)
```
The debug=True flag enables detailed error pages and auto-reloading. Never use it in production.
What Happens Under the Hood
- Lcore compiles your route patterns into an optimized regex-based router
- A WSGI server (default: wsgiref) starts listening on the given host/port
- For each request, the router matches the URL and dispatches to your handler
- The response is processed through plugins (JSON serialization, templates) and middleware
- The WSGI response is sent back to the client
Core Concepts
WSGI Application
Lcore is a standard WSGI application. Every Lcore instance is a WSGI callable and can be used with any WSGI server:
```python
# Use with any WSGI server
from lcore import Lcore

app = Lcore()
# app is a WSGI callable: app(environ, start_response)
# Use with gunicorn: gunicorn app:app

# Note: async def handlers are accepted but block the worker thread. See warning below.
@app.route('/async')
async def async_handler():
    # CPU-only coroutines are fine; loop-bound async I/O libraries
    # will stall or deadlock here (see warning below)
    return {'result': 'ok'}
```
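Because any WSGI callable has the `(environ, start_response)` shape, you can exercise it without a server. A minimal stand-in (a plain function rather than an Lcore app) shows the contract gunicorn relies on:

```python
from wsgiref.util import setup_testing_defaults

def tiny_app(environ, start_response):
    # Same protocol an Lcore instance implements: called with the WSGI
    # environ dict and a start_response callable, returns an iterable of bytes.
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return [b'hello']

environ = {}
setup_testing_defaults(environ)  # fills in a synthetic GET request

captured = {}
def start_response(status, headers):
    captured['status'] = status
    captured['headers'] = headers

body = b''.join(tiny_app(environ, start_response))
```

This is also how WSGI test clients work under the hood: build an environ dict, call the app, inspect what it passed to `start_response`.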
Lcore is a WSGI framework, not ASGI. When you use an `async def` route handler, Lcore calls `asyncio.run()` in a dedicated thread. This means:
- The worker thread is blocked for the entire duration of the coroutine.
- No concurrency benefit: async I/O does not parallelize requests.
- Loop-bound libraries will not work: `aiohttp`, `httpx`, `asyncpg`, `motor`, and similar libraries require a persistent event loop and will malfunction or deadlock inside Lcore handlers.
Lcore emits a UserWarning at startup for every `async def` route registered.
For genuine async I/O concurrency, see the lcore-asgi note below, or use an ASGI framework (FastAPI, Starlette, Quart) in the meantime.
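You can reproduce the described behavior outside any framework: running a coroutine with `asyncio.run()` on a worker thread holds that thread for the coroutine's full duration. A plain-asyncio demonstration, independent of Lcore:

```python
import asyncio
import threading
import time

async def handler():
    await asyncio.sleep(0.1)  # even a pure-asyncio await occupies the thread
    return 'done'

results = []

def worker():
    # Mirrors the behavior described above: a fresh event loop per call,
    # run to completion on the worker thread.
    results.append(asyncio.run(handler()))

start = time.monotonic()
t = threading.Thread(target=worker)
t.start()
t.join()
elapsed = time.monotonic() - start  # >= 0.1 s: the thread was blocked throughout
```

A pool of N such worker threads can therefore serve at most N in-flight requests, exactly as with synchronous handlers.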
lcore-asgi
A new companion library, lcore-asgi, is in development and will unlock
full ASGI support for the Lcore ecosystem:
- WebSockets - real-time bidirectional communication
- True async concurrency - persistent event loop, non-blocking I/O
- Async library compatibility - `asyncpg`, `httpx`, `aiohttp`, `motor`, and others will work correctly
- HTTP/2 and Server-Sent Events
- Familiar API - same routing, middleware, hooks, and plugin model as Lcore WSGI
Thread-Local Objects
Lcore provides thread-local proxy objects that are safe to use in multi-threaded servers:
```python
from lcore import request, response, ctx

@app.route('/example')
def example():
    # request - current HTTP request (read-only)
    method = request.method
    ip = request.remote_addr

    # response - outgoing HTTP response (writable)
    response.content_type = 'application/json'
    response.set_header('X-Custom', 'value')

    # ctx - request context (carries state across middleware)
    ctx.state['processed'] = True
    user = ctx.user  # set by auth middleware

    return {'method': method, 'ip': ip}
```
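Why this is safe under multi-threaded servers: each proxy resolves attribute access against state stored in `threading.local()`, so concurrent requests on different threads never see each other's data. A stripped-down sketch of the idea (illustrative names, not Lcore's implementation):

```python
import threading

_state = threading.local()

class LocalProxy:
    """Forwards attribute access to a per-thread object bound in _state."""
    def __init__(self, name):
        object.__setattr__(self, '_name', name)
    def __getattr__(self, attr):
        return getattr(getattr(_state, self._name), attr)

class FakeRequest:
    def __init__(self, method):
        self.method = method

request = LocalProxy('request')

seen = {}
def handle(method):
    _state.request = FakeRequest(method)  # bound per thread before dispatch
    seen[method] = request.method         # proxy resolves this thread's request

threads = [threading.Thread(target=handle, args=(m,)) for m in ('GET', 'POST')]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Each thread reads its own `FakeRequest` through the same module-level `request` name, which is exactly what lets handlers import `request` globally.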
Request Lifecycle
Every request flows through this pipeline:
```
Incoming Request
      |
      v
on_request_start hook
      |
      v
Middleware Pipeline (ordered by priority)
      |
      v
on_auth_resolved hook
      |
      v
Router matches URL -> Route
      |
      v
on_handler_enter hook
      |
      v
Plugin chain wraps handler
      |
      v
Route Handler executes
      |
      v
on_handler_exit hook
      |
      v
on_response_build hook
      |
      v
on_response_send hook
      |
      v
Response sent to client
```
Plugins vs Middleware
| Feature | Plugins | Middleware |
|---|---|---|
| Scope | Wraps individual route callbacks | Wraps the entire request pipeline |
| Access | Route config, callback function | Full request context (ctx) |
| Use case | JSON serialization, template rendering | Auth, CORS, logging, rate limiting |
| Registration | app.install(plugin) | app.use(middleware) |
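The distinction can be sketched with plain decorators (hypothetical shapes; Lcore's real plugin and middleware interfaces may differ):

```python
import json

def json_plugin(callback):
    # Plugin: wraps one route callback and sees only its return value.
    def wrapper(*args, **kwargs):
        result = callback(*args, **kwargs)
        return json.dumps(result) if isinstance(result, dict) else result
    return wrapper

def logging_middleware(next_handler):
    # Middleware: wraps the whole pipeline and sees the request context.
    def wrapper(ctx):
        ctx.setdefault('log', []).append(ctx['path'])
        return next_handler(ctx)
    return wrapper

@json_plugin
def get_user():
    return {'id': 1}

pipeline = logging_middleware(lambda ctx: get_user())
ctx = {'path': '/api/users/1'}
body = pipeline(ctx)  # middleware logs the path, plugin serializes the dict
```

Note the asymmetry: the plugin never touches `ctx`, and the middleware never touches the handler's return-value transformation.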
Configuration
Every Lcore app has a config attribute backed by ConfigDict, a powerful configuration store with metadata, overlays, and validation.
Basic Usage
```python
app = Lcore()

# Direct assignment
app.config['debug'] = True
app.config['my_app.db_url'] = 'sqlite:///data.db'
app.config['my_app.secret_key'] = 'change-me'
```
Loading Configuration
```python
# From a Python dict
app.config.load_dict({
    'debug': True,
    'my_app': {
        'db_url': 'sqlite:///data.db',
        'secret_key': 'change-me'
    }
})

# From environment variables (MYAPP_DB_URL -> db_url)
app.config.load_env('MYAPP_', strip_prefix=True)

# From a .env file
app.config.load_dotenv('.env')

# From a JSON file
app.config.load_json('config.json')

# From an INI file
app.config.load_config('settings.ini')
```
Config Validation
```python
from dataclasses import dataclass

@dataclass
class AppConfig:
    debug: bool = False
    db_url: str = 'sqlite:///data.db'
    max_upload: int = 10_000_000

app.config.load_dict({'debug': True, 'db_url': 'postgres://...'})
valid, errors = app.config.validate_config(AppConfig)
if not valid:
    print('Config errors:', errors)
```
Configuration keys use dot notation for namespacing: my_app.db_url. When loading from dicts, nested dicts are flattened automatically.
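That flattening step might look like the following sketch (not `ConfigDict`'s actual code):

```python
def flatten(d, prefix=''):
    # Recursively convert nested dicts into dot-notation keys.
    out = {}
    for key, value in d.items():
        full_key = prefix + key
        if isinstance(value, dict):
            out.update(flatten(value, full_key + '.'))
        else:
            out[full_key] = value
    return out

flat = flatten({'debug': True, 'my_app': {'db_url': 'sqlite:///data.db'}})
# {'debug': True, 'my_app.db_url': 'sqlite:///data.db'}
```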
Running Your App
Development
```python
# Auto-reload on file changes + detailed error pages
app.run(host='localhost', port=8080, debug=True, reloader=True)
```
Production
```python
# With Gunicorn (recommended for Linux/macOS)
app.run(server='gunicorn', host='0.0.0.0', port=8080)

# With Waitress (Windows-compatible)
app.run(server='waitress', host='0.0.0.0', port=8080)

# With Gevent (async)
app.run(server='gevent', host='0.0.0.0', port=8080)

# Or use gunicorn directly from CLI:
# gunicorn -w 4 -b 0.0.0.0:8080 app:app
```
Server Adapters
Lcore supports 20+ server adapters out of the box. Use server='auto' to automatically pick the best available:
```python
app.run(server='auto', host='0.0.0.0', port=8080)
```
The default wsgiref server is single-threaded and intended for development only. Always use a production server like Gunicorn or Waitress for deployment.
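An 'auto' selection presumably probes for installed servers in preference order; an illustrative sketch (the real order and adapter list in Lcore may differ):

```python
import importlib

def pick_server(preference=('gunicorn', 'waitress', 'gevent', 'wsgiref')):
    # Return the first importable server backend; wsgiref always exists.
    for name in preference:
        module = 'wsgiref.simple_server' if name == 'wsgiref' else name
        try:
            importlib.import_module(module)
            return name
        except ImportError:
            continue

chosen = pick_server()  # falls back to 'wsgiref' if nothing else is installed
```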
Hot Reload
```python
from lcore import WatchdogReloader

# Watchdog-based (efficient, requires watchdog package)
reloader = WatchdogReloader(app, paths=['./'], interval=1)
reloader.start()

# Or use the built-in reloader flag
app.run(reloader=True)
```
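When watchdog is unavailable, reloaders typically fall back to polling file mtimes. A single polling pass might look like this sketch (illustrative, not Lcore's reloader):

```python
import os

def scan(paths, mtimes):
    # One polling pass: return .py files whose mtime changed since the
    # last pass, updating the mtimes dict in place.
    changed = []
    for root in paths:
        for dirpath, _dirs, files in os.walk(root):
            for name in files:
                if not name.endswith('.py'):
                    continue
                path = os.path.join(dirpath, name)
                mtime = os.path.getmtime(path)
                if path in mtimes and mtimes[path] != mtime:
                    changed.append(path)
                mtimes[path] = mtime
    return changed
```

A reloader loop would call `scan()` every interval seconds and restart the process whenever it returns anything.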
Command-Line Interface
Lcore can be run directly as a Python module with command-line options:
```shell
# Basic usage
python -m lcore myapp

# With options
python -m lcore myapp -b 0.0.0.0:8080 -s gunicorn --debug --reload
```
CLI Options
| Flag | Description |
|---|---|
| `--version` | Show Lcore version |
| `-b, --bind ADDRESS` | Bind address (host:port) |
| `-s, --server NAME` | WSGI server backend (default: wsgiref) |
| `-p, --plugin MODULE` | Load a plugin module |
| `-c, --conf FILE` | Load configuration from a config file |
| `-C, --param KEY=VALUE` | Override a config parameter |
| `--debug` | Enable debug mode |
| `--reload` | Auto-reload on file changes |
| `--docs` | Open documentation website in browser |
The CLI also accepts .py file paths directly:
```shell
# All of these work
python -m lcore app
python -m lcore app.py
python -m lcore app:app
python -m lcore mypackage.app:application
```
Built-in API Documentation
When running in debug mode, Lcore automatically serves interactive API documentation at /docs, similar to FastAPI:
```python
from lcore import Lcore

app = Lcore()

@app.route('/users/<id:int>')
def get_user(id):
    '''Fetch a user by ID.'''
    return {'id': id}

app.run(port=8080, debug=True)
# /docs and /docs/json are now available
```
Visit http://localhost:8080/docs to see the auto-generated API reference, or /docs/json for the raw JSON data.
API docs are only auto-enabled in debug mode. In production, call app.enable_docs() explicitly if you want to expose them.
Project Structure
Simple Microservice
```
my-service/
    app.py          # Main application
    lcore.py        # Framework (single file)
    config.json     # Configuration
    .env            # Environment variables
    tests/
        test_app.py # Tests
```
Multi-Module Service
```
my-service/
    app.py              # App setup & middleware
    config.py           # Configuration loading
    lcore.py            # Framework
    modules/
        __init__.py
        users.py        # Users sub-app (mounted)
        products.py     # Products sub-app (mounted)
        auth.py         # Auth middleware
    templates/
        error.html
    tests/
        test_users.py
        test_products.py
```
With module mounting:
```python
# app.py
from lcore import Lcore
from modules.users import users_app
from modules.products import products_app

app = Lcore()
app.mount('/api/users/', users_app)
app.mount('/api/products/', products_app)

app.run(server='gunicorn', port=8080)
```
```python
# modules/users.py
from lcore import Lcore, request

users_app = Lcore()

@users_app.route('/')
def list_users():
    return {'users': []}

@users_app.route('/<id:int>')
def get_user(id):
    return {'id': id}
```
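Under WSGI, mounting conventionally works by shifting the mount prefix out of `PATH_INFO` and into `SCRIPT_NAME` before delegating, so the sub-app sees paths relative to its own root. A hypothetical sketch of that translation (not Lcore's mount implementation):

```python
def dispatch_mounted(prefix, sub_app, environ, start_response):
    # Move the mount prefix from PATH_INFO into SCRIPT_NAME, then delegate.
    trimmed = prefix.rstrip('/')
    environ['SCRIPT_NAME'] = environ.get('SCRIPT_NAME', '') + trimmed
    environ['PATH_INFO'] = environ['PATH_INFO'][len(trimmed):] or '/'
    return sub_app(environ, start_response)

def users_app(environ, start_response):
    # Dummy sub-app that echoes the path it was handed.
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return [environ['PATH_INFO'].encode()]

env = {'SCRIPT_NAME': '', 'PATH_INFO': '/api/users/42'}
body = b''.join(dispatch_mounted('/api/users/', users_app, env, lambda s, h: None))
# The sub-app sees '/42'; '/api/users' has moved into SCRIPT_NAME.
```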