- Move ItemInfo to services/types.rs for sharing between client and server
- Replace .expect() in compression_service with proper error handling
- Add CoreError::PayloadTooLarge variant for semantic error handling (sketch after this list)
- Export CoreError from lib.rs for library users
- Unify get_item_meta_name/value to take &str instead of String
- Extract item_path() helper in ItemService to reduce duplication
- Add warning logs for silent errors in list.rs
- Fix pre-existing borrow errors: tx moved in export handler,
item_with_meta partial move in TryFrom implementation
- Fix unused data_dir variables in server code
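A minimal sketch of how the new variant and the .expect() removal might fit together; the thiserror derive and field names are assumptions, not the actual definition:

```rust
use thiserror::Error; // assumed; the real CoreError may be hand-rolled

#[derive(Debug, Error)]
pub enum CoreError {
    #[error("payload of {size} bytes exceeds the {limit}-byte limit")]
    PayloadTooLarge { size: u64, limit: u64 },
    #[error(transparent)]
    Io(#[from] std::io::Error),
}

// Before: writer.flush().expect("compression flush failed")
// After: propagate the failure so callers can map it to an HTTP status.
fn finish<W: std::io::Write>(mut writer: W) -> Result<(), CoreError> {
    writer.flush()?;
    Ok(())
}
```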
Schema changes:
- Rename items.size to items.uncompressed_size for clarity
- Add compressed_size (INTEGER NULL) - tracks compressed file size on disk
- Add closed (BOOLEAN NOT NULL DEFAULT 1) - tracks whether item is fully written
- Existing items default to closed=true via migration
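A sketch of the migration; rusqlite is an assumption, the column changes come from the bullets above:

```rust
// Assumes rusqlite; the actual migration runner may differ.
fn migrate(conn: &rusqlite::Connection) -> rusqlite::Result<()> {
    conn.execute_batch(
        "ALTER TABLE items RENAME COLUMN size TO uncompressed_size;
         ALTER TABLE items ADD COLUMN compressed_size INTEGER NULL;
         -- DEFAULT 1 gives every pre-existing row closed = true
         ALTER TABLE items ADD COLUMN closed BOOLEAN NOT NULL DEFAULT 1;",
    )
}
```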
Lifecycle:
- Items created with closed=false, set to true on successful save/import
- Compressed size captured via fs::metadata() after compression writer closes
- Truncated uploads (413) get compressed_size set, closed=true, uncompressed_size=None
- Update command now backfills both uncompressed_size and compressed_size
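A sketch of the capture step; the ItemService helper name is hypothetical:

```rust
use std::{fs, path::Path};

// Called only after the compression writer has been closed, so the
// on-disk size is final. `mark_closed` is a hypothetical helper.
fn finalize(svc: &ItemService, id: i64, path: &Path) -> Result<(), CoreError> {
    let compressed_size = fs::metadata(path)?.len();
    svc.mark_closed(id, compressed_size) // sets closed = true in the same update
}
```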
Also includes bug fixes and dedup from prior review:
- Fix stream_raw_content_response wrongly using uncompressed_size for the raw-byte Content-Length
- ApiResponse::ok()/empty() constructors, TryFrom<ItemWithMeta> for ItemInfo (sketch after this list)
- tag_names() method on ItemWithMeta, meta_filter() on Settings
- Fix .unwrap() panics in compression engine Read/Write impls
- Fix TOCTOU race in stream_raw_content_response (now uses compressed_size)
- Fix swallowed write errors in meta plugins (digest, magic_file, exec)
- Fix term::stderr().unwrap() panic in item_service
- Deduplicate ItemService::new() calls across 20 API handlers
- ImportMeta supports #[serde(alias = "size")] for backward compat
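A sketch of the constructors and conversion; ApiResponse's and ItemInfo's fields are assumptions:

```rust
impl<T> ApiResponse<T> {
    pub fn ok(data: T) -> Self { Self { data: Some(data), error: None } }
    pub fn empty() -> Self { Self { data: None, error: None } }
}

impl TryFrom<ItemWithMeta> for ItemInfo {
    type Error = CoreError;
    fn try_from(value: ItemWithMeta) -> Result<Self, Self::Error> {
        let tags = value.tag_names();          // method added above
        let ItemWithMeta { item, .. } = value; // move fields out once, avoiding
                                               // the partial-move error fixed earlier
        Ok(ItemInfo { id: item.id, tags })
    }
}
```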
All 75 tests, 67 doc tests pass. Clippy clean.
- Add streaming tar-based export (--export produces .keep.tar)
- Add streaming tar import (--import reads .keep.tar archives)
- Add server endpoints GET /api/export and POST /api/import
- Rename CompressionType::None to CompressionType::Raw with "none" as alias
- Add DB migration to update existing "none" compression values to "raw"
- Fix export endpoint to propagate errors to client instead of swallowing
- Fix import endpoint to return 413 on max_body_size instead of truncating
Export streams items as tar archives without loading entire files into memory.
Import extracts items with new IDs, preserving original order. Both work
locally and via client/server mode.
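A sketch of the streaming append with the `tar` crate (entry naming is an assumption); append_data copies from the reader in chunks, so an item's bytes never sit in memory all at once:

```rust
use std::io::{Read, Write};
use tar::{Builder, Header};

fn export_entry<W: Write, R: Read>(
    tar: &mut Builder<W>,
    name: &str, // e.g. "42.lz4" — hypothetical naming scheme
    size: u64,  // must equal the stream length for a valid archive
    content: R,
) -> std::io::Result<()> {
    let mut header = Header::new_gnu();
    header.set_size(size);
    header.set_mode(0o644);
    tar.append_data(&mut header, name, content) // sets path + checksum, streams body
}
```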
Co-Authored-By: opencode <noreply@opencode.ai>
- Local and client/server modes now support ID-based filtering
- keep -l 1 2 3 lists specific items by ID
- keep -l --ids-only 1 2 3 outputs just those IDs
- Server API adds optional 'ids' query parameter to GET /api/item/
- KeepClient.list_items gains ids parameter
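A hedged sketch of the client side; everything beyond the new ids parameter is an assumption:

```rust
// keep -l 1 2 3 roughly becomes (ListItemsQuery fields assumed):
let items = client.list_items(ListItemsQuery {
    ids: Some(vec![1, 2, 3]), // serialized into the new 'ids' query parameter
    ..Default::default()
})?;
```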
Export/import:
- Add --export and --import modes for both local and client paths
- Use strfmt crate for --export-filename-format templates ({id}, {tags}, {ts}, {compression}); see the sketch after this section
- Import preserves original timestamps via server ?ts= param
- --import-data-file for file-based import; stdin fallback streams with PIPESIZE buffers
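A sketch of the templating; the template string shown is made up:

```rust
use std::collections::HashMap;
use strfmt::strfmt;

// e.g. --export-filename-format '{id}-{ts}.keep' (hypothetical template)
fn export_filename(fmt: &str, id: i64, tags: &[String], ts: &str, compression: &str)
    -> Result<String, strfmt::FmtError>
{
    let mut vars: HashMap<String, String> = HashMap::new();
    vars.insert("id".into(), id.to_string());
    vars.insert("tags".into(), tags.join("-"));
    vars.insert("ts".into(), ts.to_string()); // ISO 8601, per the fix below
    vars.insert("compression".into(), compression.to_string());
    strfmt(fmt, &vars)
}
```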
Service unification:
- Merge SyncDataService unique methods into ItemService (delete_item now returns Result<Item>)
- Delete AsyncDataService, AsyncItemService, DataService trait (dead code / async-blocking anti-pattern)
- All server handlers use spawn_blocking + ItemService directly (sketch after this section)
- Extract shared types (ExportMeta, ImportMeta) and helpers (resolve_item_id(s), check_binary_tty)
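A sketch of the resulting handler shape; the handler, state, and error plumbing are assumptions:

```rust
async fn handle_delete(state: AppState, id: i64) -> Result<ApiResponse<Item>, ApiError> {
    let item = tokio::task::spawn_blocking(move || {
        // blocking_lock() is correct here: we are on a blocking thread,
        // never inside the async executor (see lock-consistency fix below)
        let mut conn = state.db.blocking_lock();
        ItemService::new(&mut conn).delete_item(id) // now returns Result<Item>
    })
    .await??; // JoinError and service error both map into ApiError (assumed)
    Ok(ApiResponse::ok(item))
}
```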
Binary detection fix:
- Replace broken metadata.get("map") + is_binary(&[]) with actual content sampling
- Both as_meta and allow_binary paths read PIPESIZE sample before deciding
- Never load entire item into memory for binary check
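A sketch of the sampling check; the NUL-byte test stands in for whatever is_binary() really does:

```rust
use std::io::Read;

fn sample_is_binary<R: Read>(content: &mut R) -> std::io::Result<bool> {
    let mut sample = vec![0u8; PIPESIZE]; // one bounded read, never the whole item
    let n = content.read(&mut sample)?;
    Ok(sample[..n].contains(&0)) // placeholder heuristic
}
```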
Other fixes:
- Fix lock consistency: all handlers use blocking_lock() in spawn_blocking (no mixed lock().await)
- Use ISO 8601 format for {ts} in export filenames
- Fix resolve_item_ids returning only 1 item for tag lookups
- Fix client get.rs triple-buffering and export.rs whole-file buffering
- Add KeepClient::get_item_content_stream() for streaming reads (sketch after this list)
- Pass all clippy --features server lints (Path vs PathBuf, &mut conn, etc.)
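A sketch of the streaming read, assuming a blocking reqwest client (reqwest::blocking::Response implements std::io::Read); the endpoint path is a guess:

```rust
fn get_item_content_stream(&self, id: i64) -> Result<impl std::io::Read, ClientError> {
    let resp = self
        .http // assumed reqwest::blocking::Client
        .get(format!("{}/api/item/{id}/content", self.base_url))
        .send()?
        .error_for_status()?;
    Ok(resp) // caller can io::copy() straight to its writer, no triple-buffering
}
```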
- Client save now logs 'New item: {id}' immediately after server response
- Compression type sent as query param, stored in DB compression field (not _client_compression metadata)
- Client set_item_size() sends uncompressed size via POST /api/item/{id}/update?size=N (sketch after this list)
- Server raw content GET uses actual file size for Content-Length (not uncompressed item.size)
- Removed _client_compression metadata hack from client save and get
- Fixed server handle_update_item to support size-only updates
- Fixed clippy: collapsible_if, too_many_arguments, unnecessary mut refs
- Fixed ListItemsQuery doctest missing meta field
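A sketch of the size backfill call; the endpoint and parameter come from the bullet above, the client internals are assumptions:

```rust
fn set_item_size(&self, id: i64, uncompressed: u64) -> Result<(), ClientError> {
    self.http // assumed reqwest::blocking::Client
        .post(format!("{}/api/item/{id}/update", self.base_url))
        .query(&[("size", uncompressed.to_string())]) // percent-encoded by reqwest
        .send()?
        .error_for_status()?;
    Ok(())
}
```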
- Include error field in get_status() ApiResponse so server error
messages are surfaced instead of generic 'No status data returned'
- Use trim_lines_end() on table output to match local mode formatting
Change client get_status() to return StatusInfo struct instead of
serde_json::Value, then render paths, meta plugins, and compression
tables matching the local mode's output style.
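A sketch of the typed response handling; StatusInfo's fields and the error variant are assumptions:

```rust
#[derive(serde::Deserialize)]
struct StatusInfo {
    paths: std::collections::HashMap<String, String>,
    meta_plugins: Vec<String>,
    compression: Vec<String>,
}

fn get_status(&self) -> Result<StatusInfo, ClientError> {
    let resp: ApiResponse<StatusInfo> = self.get_json("/api/status")?;
    match (resp.data, resp.error) {
        (Some(status), _) => Ok(status),
        // surface the server's message instead of "No status data returned"
        (None, Some(msg)) => Err(ClientError::Server(msg)),
        (None, None) => Err(ClientError::Server("no status data".into())),
    }
}
```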
- URL-encode all query parameter keys and values in get_json_with_query
and post_stream. Previously raw JSON like {"project":"alpha"} was
sent unencoded, causing 'invalid uri character' errors (sketch below).
- Pass settings.meta (key=value pairs) from client save to server as
metadata. Previously always passed empty HashMap, so --meta was
silently ignored in client save mode.
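A sketch of the encoding fix using the `url` crate; the client may encode differently:

```rust
let mut url = url::Url::parse(&format!("{base}/api/item/"))?;
url.query_pairs_mut()
    .append_pair("meta", r#"{"project":"alpha"}"#); // percent-encoded automatically
// -> .../api/item/?meta=%7B%22project%22%3A%22alpha%22%7D
```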
Plumb metadata filter from client CLI through the HTTP API to the
server's data_service.list_items(). The server accepts a JSON-encoded
meta query parameter where null values mean 'key exists' and string
values mean 'exact match'.
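For example, this filter matches items whose project meta equals "alpha" and which merely have a reviewed key:

```rust
let meta = serde_json::json!({ "project": "alpha", "reviewed": null });
let query = [("meta", meta.to_string())]; // sent as the meta query parameter
```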
Also fix LZ4 compression round-trip for client mode:
- Explicitly flush the FrameEncoder before dropping it, to avoid sending
  only the frame header when compress=false (sketch after this list)
- Send _client_compression metadata so client knows actual compression
on retrieval (server records compression=None when compress=false)
- Use FrameDecoder (frame format) instead of decompress_size_prepended
(size-prepended format) to match server storage format
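A sketch of the corrected round-trip, assuming lz4_flex (where FrameEncoder/FrameDecoder and decompress_size_prepended live); finish() is shown here as the flush step:

```rust
use lz4_flex::frame::{FrameDecoder, FrameEncoder};
use std::io::{self, Read, Write};

fn compress(data: &[u8]) -> io::Result<Vec<u8>> {
    let mut enc = FrameEncoder::new(Vec::new());
    enc.write_all(data)?;
    // finish() flushes and writes the end marker; dropping the encoder
    // unflushed is what left only the frame header on the wire.
    enc.finish().map_err(io::Error::other)
}

fn decompress(data: &[u8]) -> io::Result<Vec<u8>> {
    let mut out = Vec::new();
    // FrameDecoder reads the frame format the server stores;
    // decompress_size_prepended expects a different, size-prepended layout.
    FrameDecoder::new(data).read_to_end(&mut out)?;
    Ok(out)
}
```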
Add server-side JWT authentication with permission-based access control
(read/write/delete claims). Password authentication now uses HTTP Basic
auth only (replacing Bearer). Add configurable username for both server
and client (--server-username/--client-username, defaults to "keep").
JWT secret supports file-based loading via --server-jwt-secret-file for
Docker secrets. OPTIONS preflight requests bypass auth. HEAD mapped to
read permission.
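A sketch of the permission check with the jsonwebtoken crate; the crate choice and the claim names are assumptions:

```rust
use jsonwebtoken::{decode, DecodingKey, Validation};

#[derive(serde::Deserialize)]
struct Claims { // claim names are guesses
    sub: String,
    exp: usize,
    read: bool,
    write: bool,
    delete: bool,
}

fn authorize(token: &str, secret: &[u8], method: &str) -> Result<(), AuthError> {
    let claims = decode::<Claims>(token, &DecodingKey::from_secret(secret), &Validation::default())
        .map_err(|_| AuthError::InvalidToken)?
        .claims;
    // OPTIONS never reaches this point; HEAD maps to read, per the commit.
    let allowed = match method {
        "GET" | "HEAD" => claims.read,
        "DELETE" => claims.delete,
        _ => claims.write,
    };
    if allowed { Ok(()) } else { Err(AuthError::Forbidden) }
}
```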
Co-Authored-By: opencode <noreply@opencode.ai>
Add client mode enabling the keep CLI to connect to a remote keep
server over HTTP. Local plugins (compression, meta, filters) run on
the client; the server stores/retrieves binary blobs.
Architecture:
- Client save uses 3-thread streaming pipeline: reader thread (stdin
→ tee/stdout → hash → compress), OS pipe, streamer thread (pipe →
chunked HTTP POST). Memory usage is O(PIPESIZE) regardless of data
  size (sketch after this list).
- Server accepts compress=false, meta=false, decompress=false query
params for granular control of server-side processing.
- Streaming body handling on server via async channel → sync reader
bridge (ChannelReader).
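A sketch of the pipeline's skeleton; hashing, tee-to-stdout, and compression are elided, and reqwest is assumed for the HTTP side:

```rust
use std::io::{self, Read, Write};

let (reader, mut writer) = os_pipe::pipe()?;

// Reader thread: stdin -> (tee/hash/compress elided) -> pipe.
let producer = std::thread::spawn(move || -> io::Result<u64> {
    let mut stdin = io::stdin().lock();
    let copied = io::copy(&mut stdin, &mut writer)?;
    Ok(copied) // writer drops here, closing the pipe so the POST sees EOF
});

// Streamer thread: pipe -> chunked HTTP POST. Body::new wraps any
// Read + Send, so memory stays bounded by the pipe buffer.
let streamer = std::thread::spawn(move || {
    client // assumed reqwest::blocking::Client
        .post(url)
        .body(reqwest::blocking::Body::new(reader))
        .send()
});
```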
Key additions:
- src/client.rs: KeepClient with post_stream() for chunked upload
- src/modes/client/: save, get, list, info, delete, diff, status
- --client-url / KEEP_CLIENT_URL configuration
- --client-password / KEEP_CLIENT_PASSWORD for auth
- os_pipe dependency for zero-copy pipe streaming
Co-Authored-By: andrew/openrouter/hunter-alpha <noreply@opencode.ai>