Getting Started

Everything you need to build your first Lcore microservice, from installation to deployment.

Installation

Lcore requires Python 3.8+ and has zero required dependencies. The entire framework is a single file you can drop into any project. Every module Lcore uses ships with Python itself — no pip install needed beyond Lcore.

Via pip

pip install lcore

Single-file copy

Since Lcore is a single Python file, you can simply copy lcore.py into your project directory:

curl -O https://raw.githubusercontent.com/LusanSapkota/Lcore/main/lcore.py

Standard Library Only

Lcore is built entirely on Python's standard library. Here's what powers it under the hood — all included with every Python installation:

Module                  Used For
hashlib, hmac           Password hashing (PBKDF2-SHA256), cookie signing, CSRF tokens
gzip                    Response compression middleware
json                    JSON serialization/deserialization
concurrent.futures      Background task pool (ThreadPoolExecutor)
threading               Thread-local request/response, concurrency safety
asyncio                 Async route handler support
http.cookies            Cookie parsing and formatting
mimetypes               Static file content type detection
re                      Route pattern matching and URL parsing
logging                 Structured request logging middleware
os, sys, time, uuid     Core utilities, request IDs, file serving

This means Lcore works anywhere Python runs — containers, serverless, air-gapped networks, embedded systems — with zero network access needed for dependencies.
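
For instance, the password hashing attributed to hashlib above needs no third-party code. Here is a minimal sketch of the general PBKDF2-SHA256 pattern (the salt size and iteration count are illustrative choices, not Lcore's exact internals):

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None):
    """Derive a PBKDF2-SHA256 digest using only the stdlib."""
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac('sha256', password.encode(), salt, 100_000)
    return salt, digest

def verify_password(password, salt, expected):
    _, digest = hash_password(password, salt)
    # constant-time comparison avoids timing side channels
    return hmac.compare_digest(digest, expected)

salt, digest = hash_password('s3cret')
```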

Optional Dependencies

Package     Purpose
ujson       Faster JSON parsing (auto-detected, falls back to stdlib json)
watchdog    Efficient file watching for hot reload
jinja2      Jinja2 template engine support
mako        Mako template engine support
gunicorn    Production WSGI server (Linux/macOS)
waitress    Production WSGI server (Windows-compatible)
gevent      Async WSGI server with coroutines
eventlet    Alternative async WSGI server
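
The ujson auto-detection follows the standard optional-import pattern. A sketch of the idea (Lcore's internal names may differ):

```python
# Prefer the C-accelerated parser when installed; fall back silently
# to the stdlib module, which is always present.
try:
    import ujson as json
except ImportError:
    import json

payload = json.dumps({'id': 1})
```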

Your First App

Hello World

from lcore import Lcore

app = Lcore()

@app.route('/')
def index():
    return 'Hello, World!'

app.run(port=8080)

Save this as app.py and run it:

python app.py

Visit http://localhost:8080 to see your app running.

JSON API

Return a dict from any handler and Lcore automatically serializes it as JSON with the correct Content-Type:

from lcore import Lcore, request, abort

app = Lcore()

users = {1: {'name': 'Alice'}, 2: {'name': 'Bob'}}

@app.route('/api/users/<id:int>')
def get_user(id):
    if id not in users:
        abort(404, 'User not found')
    return users[id]

@app.route('/api/users', method='POST')
def create_user():
    data = request.json
    new_id = max(users.keys(), default=0) + 1  # default=0 handles an empty store
    users[new_id] = data
    return {'id': new_id, 'created': True}

app.run(port=8080, debug=True)

Tip

The debug=True flag enables detailed error pages and auto-reloading. Never use it in production.

What Happens Under the Hood

  1. Lcore compiles your route patterns into an optimized regex-based router
  2. A WSGI server (default: wsgiref) starts listening on the given host/port
  3. For each request, the router matches the URL and dispatches to your handler
  4. The response is processed through plugins (JSON serialization, templates) and middleware
  5. The WSGI response is sent back to the client
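
Step 1 can be illustrated with a toy version of regex-based route compilation, turning a pattern such as /api/users/<id:int> into a regex with typed converters (illustrative only, not Lcore's actual compiler):

```python
import re

# filter name -> (regex fragment, Python converter)
FILTERS = {'int': (r'\d+', int), 'str': (r'[^/]+', str)}

def compile_route(pattern):
    """Turn '/users/<id:int>' into a compiled regex plus converters."""
    converters = {}
    def repl(m):
        name, kind = m.group(1), m.group(2) or 'str'
        regex, conv = FILTERS[kind]
        converters[name] = conv
        return f'(?P<{name}>{regex})'
    regex = re.sub(r'<(\w+)(?::(\w+))?>', repl, pattern)
    return re.compile(f'^{regex}$'), converters

def match(compiled, converters, path):
    m = compiled.match(path)
    if not m:
        return None
    return {k: converters[k](v) for k, v in m.groupdict().items()}

route, convs = compile_route('/api/users/<id:int>')
params = match(route, convs, '/api/users/42')
```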

Core Concepts

WSGI Application

Lcore is a standard WSGI application. The Lcore class is callable and can be used with any WSGI server:

# Use with any WSGI server
from lcore import Lcore
app = Lcore()

# app is a WSGI callable: app(environ, start_response)
# Use with gunicorn: gunicorn app:app

# Note: async def handlers are accepted but block the worker thread. See warning below.
@app.route('/async')
async def async_handler():
    # awaits do complete, but they block this worker thread the whole
    # time; loop-bound clients (aiohttp, asyncpg) will not work here
    return {'result': 'ok'}

async def handlers - WSGI constraint

Lcore is a WSGI framework, not ASGI. When you use an async def route handler, Lcore calls asyncio.run() in a dedicated thread. This means:

  • The worker thread is blocked for the entire duration of the coroutine.
  • No concurrency benefit. Async I/O does not parallelize requests.
  • Loop-bound libraries will not work: aiohttp, httpx, asyncpg, motor, and similar libraries require a persistent event loop and will malfunction or deadlock inside Lcore handlers.

Lcore emits a UserWarning at startup for every async def route registered. For genuine async I/O concurrency, use an ASGI framework (FastAPI, Starlette, Quart) until lcore-asgi (described below) is available.
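
The lack of concurrency is easy to reproduce outside the framework. This sketch mimics the bridge described above (a fresh asyncio.run() per call) and shows three handlers executing strictly back to back:

```python
import asyncio
import time

async def handler():
    await asyncio.sleep(0.05)  # simulated async I/O
    return {'result': 'ok'}

def wsgi_bridge(coro_fn):
    # Roughly what a WSGI framework must do for an async handler:
    # start a fresh event loop and block this thread until it finishes.
    return asyncio.run(coro_fn())

start = time.perf_counter()
results = [wsgi_bridge(handler) for _ in range(3)]
elapsed = time.perf_counter() - start
# Three handlers take three full sleeps in sequence: no overlap.
```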

Coming Soon: lcore-asgi

A new companion library, lcore-asgi, is in development and will unlock full ASGI support for the Lcore ecosystem:

  • WebSockets - real-time bidirectional communication
  • True async concurrency - persistent event loop, non-blocking I/O
  • Async library compatibility - asyncpg, httpx, aiohttp, motor, and others will work correctly
  • HTTP/2 and Server-Sent Events
  • Familiar API - same routing, middleware, hooks, and plugin model as Lcore WSGI

Thread-Local Objects

Lcore provides thread-local proxy objects that are safe to use in multi-threaded servers:

from lcore import request, response, ctx

@app.route('/example')
def example():
    # request - current HTTP request (read-only)
    method = request.method
    ip = request.remote_addr

    # response - outgoing HTTP response (writable)
    response.content_type = 'application/json'
    response.set_header('X-Custom', 'value')

    # ctx - request context (carries state across middleware)
    ctx.state['processed'] = True
    user = ctx.user  # set by auth middleware

    return {'method': method, 'ip': ip}
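
The mechanism behind these proxies is the standard-library threading.local, which gives every thread its own independent attribute namespace. A simplified sketch of the idea (not Lcore's actual proxy implementation):

```python
import threading

_local = threading.local()  # each thread gets its own attributes

def handle(request_id):
    # Simulate storing the "current request" for this worker thread
    _local.request_id = request_id
    # Anywhere down the call stack, the same thread reads its own value
    return _local.request_id

seen = []
threads = [
    threading.Thread(target=lambda i=i: seen.append(handle(i)))
    for i in range(4)
]
for t in threads:
    t.start()
for t in threads:
    t.join()
```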

Request Lifecycle

Every request flows through this pipeline:

Incoming Request
    |
    v
on_request_start hook
    |
    v
Middleware Pipeline (ordered by priority)
    |
    v
on_auth_resolved hook
    |
    v
Router matches URL -> Route
    |
    v
on_handler_enter hook
    |
    v
Plugin chain wraps handler
    |
    v
Route Handler executes
    |
    v
on_handler_exit hook
    |
    v
on_response_build hook
    |
    v
on_response_send hook
    |
    v
Response sent to client

Plugins vs Middleware

Feature        Plugins                                 Middleware
Scope          Wraps individual route callbacks        Wraps the entire request pipeline
Access         Route config, callback function         Full request context (ctx)
Use case       JSON serialization, template rendering  Auth, CORS, logging, rate limiting
Registration   app.install(plugin)                     app.use(middleware)
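
The distinction can be sketched in plain Python, independent of Lcore's exact signatures (which may differ): a plugin decorates one route callback, while middleware wraps the whole dispatch.

```python
import json

def json_plugin(callback):
    # Plugin: wraps one route callback, transforming its return value
    def wrapper(*args, **kwargs):
        return json.dumps(callback(*args, **kwargs))
    return wrapper

def logging_middleware(next_handler):
    # Middleware: wraps the whole dispatch, seeing every request path
    def wrapper(path):
        wrapper.log.append(path)
        return next_handler(path)
    wrapper.log = []
    return wrapper

@json_plugin
def get_user():          # a route callback
    return {'id': 1}

def dispatch(path):      # stand-in for the router
    return get_user()

app = logging_middleware(dispatch)
body = app('/api/users/1')
```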

Configuration

Every Lcore app has a config attribute backed by ConfigDict, a powerful configuration store with metadata, overlays, and validation.

Basic Usage

app = Lcore()

# Direct assignment
app.config['debug'] = True
app.config['my_app.db_url'] = 'sqlite:///data.db'
app.config['my_app.secret_key'] = 'change-me'

Loading Configuration

# From a Python dict
app.config.load_dict({
    'debug': True,
    'my_app': {
        'db_url': 'sqlite:///data.db',
        'secret_key': 'change-me'
    }
})

# From environment variables (MYAPP_DB_URL -> db_url)
app.config.load_env('MYAPP_', strip_prefix=True)

# From a .env file
app.config.load_dotenv('.env')

# From a JSON file
app.config.load_json('config.json')

# From an INI file
app.config.load_config('settings.ini')

Config Validation

from dataclasses import dataclass

@dataclass
class AppConfig:
    debug: bool = False
    db_url: str = 'sqlite:///data.db'
    max_upload: int = 10_000_000

app.config.load_dict({'debug': True, 'db_url': 'postgres://...'})
valid, errors = app.config.validate_config(AppConfig)
if not valid:
    print('Config errors:', errors)
Note

Configuration keys use dot notation for namespacing: my_app.db_url. When loading from dicts, nested dicts are flattened automatically.
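
That flattening can be sketched in a few lines (illustrative only, not ConfigDict's actual implementation):

```python
def flatten(d, prefix=''):
    """Flatten nested dicts into dot-notation keys."""
    out = {}
    for key, value in d.items():
        full = f'{prefix}.{key}' if prefix else key
        if isinstance(value, dict):
            out.update(flatten(value, full))
        else:
            out[full] = value
    return out

flat = flatten({'debug': True, 'my_app': {'db_url': 'sqlite:///data.db'}})
```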

Running Your App

Development

# Auto-reload on file changes + detailed error pages
app.run(host='localhost', port=8080, debug=True, reloader=True)

Production

# With Gunicorn (recommended for Linux/macOS)
app.run(server='gunicorn', host='0.0.0.0', port=8080)

# With Waitress (Windows-compatible)
app.run(server='waitress', host='0.0.0.0', port=8080)

# With Gevent (async)
app.run(server='gevent', host='0.0.0.0', port=8080)

# Or use gunicorn directly from CLI:
# gunicorn -w 4 -b 0.0.0.0:8080 app:app

Server Adapters

Lcore supports 20+ server adapters out of the box. Use server='auto' to automatically pick the best available:

app.run(server='auto', host='0.0.0.0', port=8080)

Warning

The default wsgiref server is single-threaded and intended for development only. Always use a production server like Gunicorn or Waitress for deployment.

Hot Reload

from lcore import WatchdogReloader

# Watchdog-based (efficient, requires watchdog package)
reloader = WatchdogReloader(app, paths=['./'], interval=1)
reloader.start()

# Or use the built-in reloader flag
app.run(reloader=True)

Command-Line Interface

Lcore can be run directly as a Python module with command-line options:

# Basic usage
python -m lcore myapp

# With options
python -m lcore myapp -b 0.0.0.0:8080 -s gunicorn --debug --reload

CLI Options

Flag                    Description
--version               Show Lcore version
-b, --bind ADDRESS      Bind address (host:port)
-s, --server NAME       WSGI server backend (default: wsgiref)
-p, --plugin MODULE     Load a plugin module
-c, --conf FILE         Load configuration from a config file
-C, --param KEY=VALUE   Override a config parameter
--debug                 Enable debug mode
--reload                Auto-reload on file changes
--docs                  Open documentation website in browser

The CLI also accepts .py file paths directly:

# All of these work
python -m lcore app
python -m lcore app.py
python -m lcore app:app
python -m lcore mypackage.app:application

Built-in API Documentation

When running in debug mode, Lcore automatically serves interactive API documentation at /docs, similar to FastAPI:

from lcore import Lcore

app = Lcore()

@app.route('/users/<id:int>')
def get_user(id):
    '''Fetch a user by ID.'''
    return {'id': id}

app.run(port=8080, debug=True)
# /docs and /docs/json are now available

Visit http://localhost:8080/docs to see the auto-generated API reference, or /docs/json for the raw JSON data.

Production

API docs are only auto-enabled in debug mode. In production, call app.enable_docs() explicitly if you want to expose them.

Project Structure

Simple Microservice

my-service/
  app.py          # Main application
  lcore.py        # Framework (single file)
  config.json     # Configuration
  .env            # Environment variables
  tests/
    test_app.py   # Tests

Multi-Module Service

my-service/
  app.py              # App setup & middleware
  config.py           # Configuration loading
  lcore.py            # Framework
  modules/
    __init__.py
    users.py          # Users sub-app (mounted)
    products.py       # Products sub-app (mounted)
    auth.py           # Auth middleware
  templates/
    error.html
  tests/
    test_users.py
    test_products.py

With module mounting:

# app.py
from lcore import Lcore
from modules.users import users_app
from modules.products import products_app

app = Lcore()
app.mount('/api/users/', users_app)
app.mount('/api/products/', products_app)

app.run(server='gunicorn', port=8080)

# modules/users.py
from lcore import Lcore, request

users_app = Lcore()

@users_app.route('/')
def list_users():
    return {'users': []}

@users_app.route('/<id:int>')
def get_user(id):
    return {'id': id}