- Move ItemInfo to services/types.rs for sharing between client and server
- Replace .expect() in compression_service with proper error handling
- Add CoreError::PayloadTooLarge variant for semantic error handling
- Export CoreError from lib.rs for library users
- Unify get_item_meta_name/value to take &str instead of String
- Extract item_path() helper in ItemService to reduce duplication
- Add warning logs for silent errors in list.rs
- Fix pre-existing borrow errors: tx moved in export handler,
item_with_meta partial move in TryFrom implementation
- Fix unused data_dir variables in server code
- Add streaming tar-based export (--export produces .keep.tar)
- Add streaming tar import (--import reads .keep.tar archives)
- Add server endpoints GET /api/export and POST /api/import
- Rename CompressionType::None to CompressionType::Raw with "none" as alias
- Add DB migration to update existing "none" compression values to "raw"
- Fix export endpoint to propagate errors to client instead of swallowing
- Fix import endpoint to return 413 on max_body_size instead of truncating
Export streams items as tar archives without loading entire files into memory.
Import extracts items with new IDs, preserving original order. Both work
locally and in client/server mode.
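Streaming "without loading entire files into memory" comes down to a fixed-buffer copy loop. A minimal sketch (the `stream_copy` name and 8192-byte `PIPESIZE` are taken from later entries in this log; the real helper's signature may differ):

```rust
use std::io::{Read, Write};

const PIPESIZE: usize = 8192;

/// Copy `reader` to `writer` in PIPESIZE chunks, returning bytes copied.
/// Memory use stays O(PIPESIZE) regardless of input size.
fn stream_copy<R: Read, W: Write>(reader: &mut R, writer: &mut W) -> std::io::Result<u64> {
    let mut buf = [0u8; PIPESIZE];
    let mut total = 0u64;
    loop {
        let n = reader.read(&mut buf)?;
        if n == 0 {
            break; // EOF
        }
        writer.write_all(&buf[..n])?;
        total += n as u64;
    }
    writer.flush()?;
    Ok(total)
}

fn main() -> std::io::Result<()> {
    let data = vec![7u8; 20_000]; // larger than one chunk
    let mut src: &[u8] = &data;
    let mut dst = Vec::new();
    let copied = stream_copy(&mut src, &mut dst)?;
    assert_eq!(copied, 20_000);
    assert_eq!(dst, data);
    Ok(())
}
```

The same loop shape works whether the writer is a tar archive builder, a compression encoder, or an HTTP body.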
Co-Authored-By: opencode <noreply@opencode.ai>
Export/import:
- Add --export and --import modes for both local and client paths
- Use strfmt crate for --export-filename-format templates ({id}, {tags}, {ts}, {compression})
- Import preserves original timestamps via server ?ts= param
- --import-data-file for file-based import; stdin fallback streams with PIPESIZE buffers
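The `--export-filename-format` templates are rendered by the strfmt crate; the semantics can be sketched with a std-only stand-in (illustration only, not the strfmt API):

```rust
use std::collections::HashMap;

/// Minimal stand-in for strfmt-style templates: replaces each `{key}`
/// placeholder with its value. (The real code uses the strfmt crate.)
fn render_filename(template: &str, vars: &HashMap<&str, String>) -> String {
    let mut out = template.to_string();
    for (k, v) in vars {
        out = out.replace(&format!("{{{}}}", k), v);
    }
    out
}

fn main() {
    let mut vars = HashMap::new();
    vars.insert("id", "42".to_string());
    vars.insert("tags", "notes".to_string());
    vars.insert("ts", "2024-01-02T03:04:05Z".to_string());
    vars.insert("compression", "lz4".to_string());
    let name = render_filename("{id}-{tags}-{ts}.{compression}", &vars);
    assert_eq!(name, "42-notes-2024-01-02T03:04:05Z.lz4");
}
```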
Service unification:
- Merge SyncDataService unique methods into ItemService (delete_item now returns Result<Item>)
- Delete AsyncDataService, AsyncItemService, DataService trait (dead code / async-blocking anti-pattern)
- All server handlers use spawn_blocking + ItemService directly
- Extract shared types (ExportMeta, ImportMeta) and helpers (resolve_item_id(s), check_binary_tty)
Binary detection fix:
- Replace broken metadata.get("map") + is_binary(&[]) with actual content sampling
- Both as_meta and allow_binary paths read PIPESIZE sample before deciding
- Never load entire item into memory for binary check
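Sampling-based detection can be sketched as follows; the NUL-byte heuristic here is an assumption for illustration, and the actual `is_binary` check may use a different signal:

```rust
use std::io::Read;

const PIPESIZE: usize = 8192;

/// Decide whether content looks binary by sampling at most PIPESIZE bytes,
/// never reading the whole item. A NUL byte in the sample is treated as a
/// strong binary indicator (hypothetical heuristic).
/// Returns the sample too, so callers can reuse it instead of re-reading.
fn sample_is_binary<R: Read>(reader: &mut R) -> std::io::Result<(bool, Vec<u8>)> {
    let mut sample = vec![0u8; PIPESIZE];
    let mut filled = 0;
    while filled < PIPESIZE {
        let n = reader.read(&mut sample[filled..])?;
        if n == 0 {
            break;
        }
        filled += n;
    }
    sample.truncate(filled);
    let binary = sample.contains(&0);
    Ok((binary, sample))
}

fn main() -> std::io::Result<()> {
    let mut text: &[u8] = b"hello, plain text";
    let (is_bin, sample) = sample_is_binary(&mut text)?;
    assert!(!is_bin);
    assert_eq!(sample, b"hello, plain text");

    let mut binary: &[u8] = &[0x89, b'P', b'N', b'G', 0x00, 0x1a];
    let (is_bin, _) = sample_is_binary(&mut binary)?;
    assert!(is_bin);
    Ok(())
}
```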
Other fixes:
- Fix lock consistency: all handlers use blocking_lock() in spawn_blocking (no mixed lock().await)
- Use ISO 8601 format for {ts} in export filenames
- Fix resolve_item_ids returning only 1 item for tag lookups
- Fix client get.rs triple-buffering and export.rs whole-file buffering
- Add KeepClient::get_item_content_stream() for streaming reads
- Pass all clippy --features server lints (Path vs PathBuf, &mut conn, etc.)
- server_compress was true when compression_type=None, telling the server to
recompress with its default (lz4) instead of storing raw
- compression_type query param was only sent when !server_compress,
so 'none' was never sent to server
- Fix: server_compress always false in client mode (client handles all
compression), compression_type always sent to server
Tested: save/get/list/info/filters/delete for lz4, none, gzip on both
local and client/server modes. All operations produce matching results.
- Client save now logs 'New item: {id}' immediately after server response
- Compression type sent as query param, stored in DB compression field (not _client_compression metadata)
- Client set_item_size() sends uncompressed size via POST /api/item/{id}/update?size=N
- Server raw content GET uses actual file size for Content-Length (not uncompressed item.size)
- Removed _client_compression metadata hack from client save and get
- Fixed server handle_update_item to support size-only updates
- Fixed clippy: collapsible_if, too_many_arguments, unnecessary mut refs
- Fixed ListItemsQuery doctest missing meta field
- Add SaveMetaFn callback pattern: meta plugins receive a closure instead of
&Connection, enabling the same plugin code to work in local, client, and
server contexts (collect-to-Vec, collect-to-HashMap, or direct DB write)
- Client save now runs meta plugins locally during streaming (smart client
sets meta=false, server skips its own plugins)
- Add POST /api/item/{id}/update endpoint for re-running plugins on stored
content without downloading compressed data
- Add client update mode (--update with --meta-plugin flags)
- Extract shared utilities: stream_copy, print_serialized, build_path_table,
ensure_default_tag to reduce duplication across modes
- Add upsert_tag for idempotent tag addition (INSERT OR IGNORE)
- Add warn logging on save_meta lock failure in BaseMetaPlugin and MetaService
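The SaveMetaFn callback pattern can be sketched like this; the type alias and plugin signature are illustrative, not the real API:

```rust
use std::collections::HashMap;

/// A meta plugin receives a save callback instead of a &Connection, so the
/// same plugin body works in every context: the caller decides whether the
/// callback collects to a Vec, to a HashMap, or writes straight to the DB.
type SaveMetaFn<'a> = &'a mut dyn FnMut(&str, &str);

/// Hypothetical plugin: records the content length as metadata.
fn size_plugin(content: &[u8], save_meta: SaveMetaFn) {
    save_meta("size", &content.len().to_string());
}

fn main() {
    // Client context: collect to a Vec for later upload.
    let mut collected: Vec<(String, String)> = Vec::new();
    size_plugin(b"hello", &mut |k, v| collected.push((k.into(), v.into())));
    assert_eq!(collected, vec![("size".to_string(), "5".to_string())]);

    // Server/local context: collect to a map (stand-in for a direct DB write).
    let mut map: HashMap<String, String> = HashMap::new();
    size_plugin(b"hello", &mut |k, v| {
        map.insert(k.into(), v.into());
    });
    assert_eq!(map.get("size").map(String::as_str), Some("5"));
}
```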
Plumb metadata filter from client CLI through the HTTP API to the
server's data_service.list_items(). The server accepts a JSON-encoded
meta query parameter where null values mean 'key exists' and string
values mean 'exact match'.
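The filter semantics, once the JSON `meta` parameter is decoded, can be sketched with the decoded form modeled as a map (the real code parses JSON; names here are illustrative):

```rust
use std::collections::HashMap;

/// After decoding the `meta` query parameter: a null value means
/// "key must exist", a string value means "exact match".
fn item_matches(
    item_meta: &HashMap<String, String>,
    filter: &HashMap<String, Option<String>>,
) -> bool {
    filter.iter().all(|(key, want)| match (item_meta.get(key), want) {
        (Some(_), None) => true,                  // null: key exists
        (Some(have), Some(want)) => have == want, // string: exact match
        (None, _) => false,
    })
}

fn main() {
    let mut meta = HashMap::new();
    meta.insert("lang".to_string(), "rust".to_string());

    let mut exists = HashMap::new();
    exists.insert("lang".to_string(), None);
    assert!(item_matches(&meta, &exists)); // key-exists filter passes

    let mut exact = HashMap::new();
    exact.insert("lang".to_string(), Some("go".to_string()));
    assert!(!item_matches(&meta, &exact)); // exact-match filter fails
}
```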
Also fix LZ4 compression round-trip for client mode:
- Explicit flush FrameEncoder before drop to avoid sending only the
frame header when compress=false
- Send _client_compression metadata so client knows actual compression
on retrieval (server records compression=None when compress=false)
- Use FrameDecoder (frame format) instead of decompress_size_prepended
(size-prepended format) to match server storage format
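The failure mode behind the flush fix is general to buffering encoders: the frame header is written eagerly, but buffered data only reaches the sink on an explicit flush/finish. A toy std-only encoder (not lz4_flex) makes the bug reproducible:

```rust
use std::io::{self, Write};

/// Toy encoder: writes a header eagerly, holds the body in an internal
/// buffer until finish(). Dropping it unfinished emits only the header,
/// the same failure mode as dropping an unflushed frame encoder.
struct ToyEncoder<W: Write> {
    inner: W,
    buf: Vec<u8>,
}

impl<W: Write> ToyEncoder<W> {
    fn new(mut inner: W) -> io::Result<Self> {
        inner.write_all(b"HDR")?; // frame header goes out immediately
        Ok(Self { inner, buf: Vec::new() })
    }
    fn finish(mut self) -> io::Result<W> {
        self.inner.write_all(&self.buf)?; // body only leaves on finish
        self.inner.flush()?;
        Ok(self.inner)
    }
}

impl<W: Write> Write for ToyEncoder<W> {
    fn write(&mut self, data: &[u8]) -> io::Result<usize> {
        self.buf.extend_from_slice(data);
        Ok(data.len())
    }
    fn flush(&mut self) -> io::Result<()> {
        Ok(())
    }
}

fn main() -> io::Result<()> {
    // Correct path: explicit finish before drop.
    let mut good = Vec::new();
    let mut enc = ToyEncoder::new(&mut good)?;
    enc.write_all(b"payload")?;
    enc.finish()?;
    assert_eq!(good, b"HDRpayload");

    // Bug path: dropping without finish leaves only the header.
    let mut bad = Vec::new();
    let mut enc = ToyEncoder::new(&mut bad)?;
    enc.write_all(b"payload")?;
    drop(enc);
    assert_eq!(bad, b"HDR");
    Ok(())
}
```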
Major overhaul of server architecture and security posture:
- Streaming: Unified all I/O through PIPESIZE (8192-byte) buffers.
POST bodies stream via MpscReader through the save pipeline. GET
content streams from disk via decompression to client. Removed
save_item_with_reader, get_item_content_info, ChannelReader.
413 responses keep partial items (nonfatal by design).
- Security: XSS protection in all HTML pages via html_escape crate.
Security headers middleware (nosniff, frame deny, referrer policy).
CORS tightened to explicit headers. Input validation for tags
(256 chars), metadata (128/4096), pagination (10k cap). Config
file reads use from_utf8_lossy. Generic error messages in HTML.
Diff endpoint has 10 MB per-item cap. max_body_size config option.
- Panics eliminated: Path unwraps → proper error propagation.
Mutex unwraps → map_err (registries) / expect with message (local).
- MCP removed: Deleted all MCP code, rmcp dependency, mcp feature.
- Docs: Updated README, DESIGN, AGENTS to reflect all changes.
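The MpscReader bridge mentioned above can be sketched with std's mpsc standing in for the server's async channel (the real type and its exact behavior may differ):

```rust
use std::io::{self, Read};
use std::sync::mpsc::{sync_channel, Receiver};

/// Bridge a channel of byte chunks into a blocking `Read`, so the sync
/// save pipeline can consume a streamed request body chunk by chunk.
struct MpscReader {
    rx: Receiver<Vec<u8>>,
    chunk: Vec<u8>,
    pos: usize,
}

impl MpscReader {
    fn new(rx: Receiver<Vec<u8>>) -> Self {
        Self { rx, chunk: Vec::new(), pos: 0 }
    }
}

impl Read for MpscReader {
    fn read(&mut self, buf: &mut [u8]) -> io::Result<usize> {
        // Refill from the channel when the current chunk is exhausted
        // (skipping empty chunks); sender hangup means EOF.
        while self.pos >= self.chunk.len() {
            match self.rx.recv() {
                Ok(chunk) => {
                    self.chunk = chunk;
                    self.pos = 0;
                }
                Err(_) => return Ok(0),
            }
        }
        let n = buf.len().min(self.chunk.len() - self.pos);
        buf[..n].copy_from_slice(&self.chunk[self.pos..self.pos + n]);
        self.pos += n;
        Ok(n)
    }
}

fn main() {
    let (tx, rx) = sync_channel::<Vec<u8>>(4);
    let producer = std::thread::spawn(move || {
        for chunk in [b"hello ".to_vec(), b"world".to_vec()] {
            tx.send(chunk).unwrap();
        }
        // tx dropped here => reader sees EOF
    });
    let mut reader = MpscReader::new(rx);
    let mut out = String::new();
    reader.read_to_string(&mut out).unwrap();
    producer.join().unwrap();
    assert_eq!(out, "hello world");
}
```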
Add server-side JWT authentication with permission-based access control
(read/write/delete claims). Password authentication now uses HTTP Basic
auth only (replacing Bearer). Add configurable username for both server
and client (--server-username/--client-username, defaults to "keep").
JWT secret supports file-based loading via --server-jwt-secret-file for
Docker secrets. OPTIONS preflight requests bypass auth. HEAD mapped to
read permission.
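The method-to-permission mapping described above can be sketched as follows (struct and function names are illustrative, not the actual server code):

```rust
#[derive(Debug, PartialEq)]
enum Permission {
    Read,
    Write,
    Delete,
}

/// Map an HTTP method to the permission claim it requires.
/// OPTIONS (CORS preflight) requires none; HEAD maps to read.
fn required_permission(method: &str) -> Option<Permission> {
    match method {
        "OPTIONS" => None,
        "GET" | "HEAD" => Some(Permission::Read),
        "POST" | "PUT" | "PATCH" => Some(Permission::Write),
        "DELETE" => Some(Permission::Delete),
        _ => Some(Permission::Write), // conservative default (assumption)
    }
}

/// Permission claims carried in the JWT.
struct Claims {
    read: bool,
    write: bool,
    delete: bool,
}

fn is_allowed(claims: &Claims, method: &str) -> bool {
    match required_permission(method) {
        None => true,
        Some(Permission::Read) => claims.read,
        Some(Permission::Write) => claims.write,
        Some(Permission::Delete) => claims.delete,
    }
}

fn main() {
    let read_only = Claims { read: true, write: false, delete: false };
    assert!(is_allowed(&read_only, "GET"));
    assert!(is_allowed(&read_only, "HEAD")); // HEAD mapped to read
    assert!(is_allowed(&read_only, "OPTIONS")); // preflight bypasses auth
    assert!(!is_allowed(&read_only, "POST"));
    assert!(!is_allowed(&read_only, "DELETE"));
}
```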
Co-Authored-By: opencode <noreply@opencode.ai>
Add client mode enabling the keep CLI to connect to a remote keep
server over HTTP. Local plugins (compression, meta, filters) run on
the client; the server stores/retrieves binary blobs.
Architecture:
- Client save uses 3-thread streaming pipeline: reader thread (stdin
→ tee/stdout → hash → compress), OS pipe, streamer thread (pipe →
chunked HTTP POST). Memory usage is O(PIPESIZE) regardless of data
size.
- Server accepts compress=false, meta=false, decompress=false query
params for granular control of server-side processing.
- Streaming body handling on server via async channel → sync reader
bridge (ChannelReader).
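The pipeline shape can be sketched with a bounded channel standing in for the OS pipe; the checksum is a toy stand-in for the real hash, and names are illustrative:

```rust
use std::sync::mpsc::sync_channel;
use std::thread;

const PIPESIZE: usize = 8192;

/// Reader thread chunks the input (hashing as it goes) and sends chunks
/// through a bounded channel; the streamer thread forwards them to the
/// "HTTP" sink. Channel backpressure keeps memory at O(PIPESIZE).
fn run_pipeline(input: Vec<u8>) -> (u64, Vec<u8>) {
    let (tx, rx) = sync_channel::<Vec<u8>>(1); // capacity 1 ≈ pipe backpressure
    let reader = thread::spawn(move || {
        let mut sum = 0u64; // toy stand-in for the content hash
        for chunk in input.chunks(PIPESIZE) {
            sum += chunk.iter().map(|&b| b as u64).sum::<u64>();
            tx.send(chunk.to_vec()).unwrap();
        }
        sum // tx dropped here => streamer sees end of stream
    });
    let streamer = thread::spawn(move || {
        let mut sink = Vec::new(); // stand-in for the chunked HTTP POST body
        while let Ok(chunk) = rx.recv() {
            sink.extend_from_slice(&chunk);
        }
        sink
    });
    (reader.join().unwrap(), streamer.join().unwrap())
}

fn main() {
    let input: Vec<u8> = (0..100_000u32).map(|i| (i % 251) as u8).collect();
    let expected_sum: u64 = input.iter().map(|&b| b as u64).sum();
    let (sum, sink) = run_pipeline(input.clone());
    assert_eq!(sum, expected_sum);
    assert_eq!(sink, input);
}
```

The real client inserts an OS pipe (via os_pipe) between the stages so the streamer can hand a plain reader to the HTTP library.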
Key additions:
- src/client.rs: KeepClient with post_stream() for chunked upload
- src/modes/client/: save, get, list, info, delete, diff, status
- --client-url / KEEP_CLIENT_URL configuration
- --client-password / KEEP_CLIENT_PASSWORD for auth
- os_pipe dependency for zero-copy pipe streaming
Co-Authored-By: andrew/openrouter/hunter-alpha <noreply@opencode.ai>