# Show HN: Anki-LLM – Bulk process and generate Anki flashcards with LLMs

A CLI toolkit for bulk-processing and generating Anki flashcards with LLMs.

- **Bulk-verify translations** – End-to-end pipeline for cleaning large decks.
- **Add a Key Vocabulary field** – Create a per-note field highlighting 1–3 key words with readings, meanings, and HTML context.
- **Generate new cards** – Interactively create multiple contextual flashcards for a vocabulary word or concept from a single command.
- **Scriptable collection access** – Query AnkiConnect directly from the CLI or from AI agents.

## Why?

Manually editing hundreds or thousands of Anki cards is tedious, error-prone, and time-consuming. Whether it's verifying translations, adding grammar notes, or generating contextual examples, doing it by hand doesn't scale. anki-llm provides a bridge between your Anki collection and modern AI models, using LLM APIs to process your notes.

## How it works

**Batch processing** – two workflows:

- **File-based** (with resume): Export a deck to a file, process it with an LLM, and import the results back to Anki.
- **Direct** (one step): Process and update notes in-place.

**Card generation** – Generate multiple contextual flashcard examples for a term, review them interactively, and add the selected cards to your deck.

## Features

- Export Anki decks to clean CSV or YAML files.
- Batch-process note fields using OpenAI or Google Gemini models.
- **Custom prompts**: Use flexible template files to define exactly how the LLM should process your cards.
- **Concurrent processing**: Make multiple parallel API requests to speed up large jobs.
- **Resilient**: Automatically retries failed requests and saves progress incrementally (file mode).
- **Automatic resume**: Pick up where you left off if processing is interrupted (file mode).
- **Copy mode**: Alternatively, generate cards without API keys by pasting LLM responses from browser interfaces (ChatGPT, Claude, etc.).

## Installation

Install globally via npm.

Requirements:

- Node.js v18 or higher
- Anki Desktop must be running
- The AnkiConnect add-on must be installed in Anki
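A quickstart sketch (the npm package name is assumed to match the project name, and the environment-variable names are assumed from each provider's usual conventions):

```shell
# Install the CLI globally
npm install -g anki-llm

# Configure an API key for your chosen provider (only one is needed)
export OPENAI_API_KEY="sk-..."   # for OpenAI models
export GEMINI_API_KEY="..."      # for Gemini models
```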
## Configuring API keys

You need to configure an API key for the model provider you want to use. The tool supports two API providers: OpenAI and Google Gemini. Pricing is per million tokens (M); check the latest prices on the provider's website to be sure.

Set the appropriate environment variable for your chosen provider:

- **OpenAI models**: get your API key from https://platform.openai.com/api-keys
- **Gemini models**: get your API key from https://aistudio.google.com/api-keys

Use `anki-llm config` to store defaults (for example, the model) so you don't have to repeat flags on every command. The config file lives at `~/.config/anki-llm/config.json`.

## Command reference

- `export` – Export a deck to a file
- `import` – Import data into a deck
- `process-file` – Process notes from a file with AI
- `process-deck` – Process notes from a deck with AI
- `generate-init` – Create a prompt template for `generate`
- `generate` – Generate new cards for a term
- `query` – Query the AnkiConnect API

### export

Exports notes from an Anki deck.

- `deck`: The name of the Anki deck to export (quote it if it contains spaces).
- `[output]`: Optional output file path. If omitted, a filename is generated automatically from the deck name (e.g., "My Deck" → `my-deck.yaml`). You can also provide just a file extension (e.g., `.csv`) to auto-generate the filename in your preferred format.

### import

Imports data from a file into an Anki deck. Existing notes (matched by the key field) are updated, while new entries create new notes.

- `file`: Path to the data file to import (CSV or YAML).

Required options:

- `-d, --deck`: The name of the target Anki deck.

Common options:

- `-n, --note-type`: The Anki note type to use when creating new notes. If not specified, it is inferred from existing notes in the deck.
- `-k, --key-field`: Field used to identify existing notes. If not specified, auto-detects using this priority: (1) a `noteId` column if present, (2) the first field of the note type, (3) error if neither is found.

### process-file

Batch-processes notes from a CSV/YAML file using an LLM and user-defined prompts.
This command saves the transformed results to an output file and features automatic resume, allowing it to safely skip completed notes if a run is interrupted or repeated.

- `input`: Input file path (CSV or YAML).

Required options:

- `-o, --output`: Output file path (CSV or YAML).
- `-p, --prompt`: Path to the prompt template text file.
- Either `--field` or `--json` (mutually exclusive):
  - `--field`: Update a single field with the AI response.
  - `--json`: Expect a JSON response and merge all of its fields into the note.

See Understanding --field vs --json modes for more details.

Common options:

- `-m, --model`: AI model to use (required unless set via `config set model`).
- `-b, --batch-size`: Number of concurrent API requests (default: 5).
- `-r, --retries`: Number of retries for failed requests (default: 3).
- `-d, --dry-run`: Preview the operation without making API calls (recommended for testing).
- `-f, --force`: Re-process all rows, ignoring existing output.
- `--limit`: Limit the number of new rows to process (useful for testing prompts on a small sample before processing large datasets).
- `--require-result-tag`: Only extract content from within the result tags in the AI response.
- `--log`: Generate a log file with detailed debug information.
- `--very-verbose`: Log full LLM responses to the log file (automatically enables `--log`). Useful for debugging prompts and understanding model behavior.

Quick example:

1. Export the deck to a file: `anki-llm export "My Deck" notes.yaml`
2. Process the file: `anki-llm process-file notes.yaml -o output.yaml --field Translation -p prompt.txt -m gpt-4o-mini`
3. Import the results: `anki-llm import output.yaml -d "My Deck"`

Key features:

- ✅ Automatic resume: skips already-processed notes
- ✅ Incremental saves: progress is saved continuously
- ✅ Review before import: you can inspect and edit the output file before importing

When to use this command: process-file provides a file-based workflow for batch processing notes. It is the primary alternative to the process-deck command, which modifies notes directly in your Anki collection.
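Putting the three steps of the quick example together (the deck name, file names, and prompt file are illustrative):

```shell
# 1. Export the deck to a YAML file
anki-llm export "My Deck" notes.yaml

# 2. Batch-process the Translation field with an LLM
anki-llm process-file notes.yaml -o output.yaml \
  --field Translation -p prompt.txt -m gpt-4o-mini

# 3. Import the processed notes back into the deck
anki-llm import output.yaml -d "My Deck"
```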
Use process-file instead of process-deck when you:

- **Require a manual review step.** The command outputs to a file, creating a safe staging area to inspect results before you commit them to your Anki deck.
- **Need to process a large number of notes where interruptions are possible.** Its resume capability ensures you don't lose progress if the process fails midway.
- **Are operating in an environment without a running Anki instance.** This command is fully self-contained and does not need to connect to the Anki application.

### process-deck

Batch-processes notes directly from an Anki deck using an LLM and user-defined prompts, updating them in place. No intermediate files are needed. This is faster and more convenient once you've tested your prompt and know the result is safe to apply.

- `deck`: Name of the Anki deck to process (quote it if it contains spaces).

Required options:

- `-p, --prompt`: Path to the prompt template text file.
- Either `--field` or `--json` (mutually exclusive):
  - `--field`: Update a single field with the AI response.
  - `--json`: Expect a JSON response and merge all of its fields into the note.

See Understanding --field vs --json modes for more details.

Common options:

- `-m, --model`: AI model to use (required unless set via `config set model`).
- `-b, --batch-size`: Number of concurrent API requests (default: 5).
- `-r, --retries`: Number of retries for failed requests (default: 3).
- `-d, --dry-run`: Preview the operation without making API calls (recommended for testing).
- `--limit`: Limit the number of notes to process (useful for testing prompts on a small sample before processing the entire deck).
- `--require-result-tag`: Only extract content from within the result tags in the AI response.
- `--log`: Generate a log file with detailed debug information.
- `--very-verbose`: Log full LLM responses to the log file (automatically enables `--log`).
Useful for debugging prompts and understanding model behavior.

Prerequisites:

- Anki Desktop must be running
- The AnkiConnect add-on must be installed

Single command: `anki-llm process-deck "My Deck" --field Translation -p prompt.txt -m gpt-4o-mini`

Key features:

- ✅ No intermediate files: process and update in one step
- ✅ Batch updates: efficient bulk updates to Anki
- ✅ Error logging: failed notes are logged to `[deck-name]-errors.jsonl`
- ❌ No resume support: must complete in one run (use process-file for large datasets)

### Understanding --field vs --json modes

Both process-file and process-deck support two response formats for the LLM.

`--field` mode (single-field update): the LLM response is saved to the specified field.

`anki-llm process-file notes.yaml -o out.yaml -p prompt.txt --field Translation`

`--json` mode (multi-field merge): the LLM must return valid JSON, and all fields in the JSON are merged into your note.

`anki-llm process-file notes.yaml -o out.yaml -p prompt.txt --json`

Example: if your note has Japanese and Grammar fields, and the LLM returns `{"Japanese": "こんにちは", "Grammar": "greeting"}`, both fields will be updated. Only fields present in the JSON are updated (partial updates are allowed). If the response is not valid JSON, the operation fails and is retried.

### generate-init

Interactively creates a prompt template file for the generate command. The wizard guides you through selecting a deck and note type, then uses an LLM to analyze your existing cards and generate a tailored prompt that matches your deck's style and formatting. This is the recommended way to get started with card generation.

- `[output]`: Optional output file path. If omitted, a filename is generated automatically from the deck name.

Common options:

- `-m, --model`: The LLM model to use for the smart prompt-generation step (e.g., `gemini-2.5-pro`).
- `-t, --temperature`: Temperature for LLM generation (0.0–2.0, default varies by model). Lower values produce more consistent output.
- `--copy`: Copy the LLM prompt to the clipboard and wait for a manually pasted response.
Useful when you don't have API access and want to use a browser LLM interface like ChatGPT.

Workflow:

1. Run the wizard: `anki-llm generate-init`
2. Follow the interactive steps to select a deck and note type. A prompt file (e.g., `my-deck-prompt.md`) is created for you.
3. Review and customize the generated prompt file.
4. Use the file with the generate command: `anki-llm generate "term" -p my-deck-prompt.md`

### generate

Generates multiple new Anki card examples for a given term, lets you review and select which ones to keep, and adds them directly to your deck.

- `term`: The word or phrase to generate cards for (quote it if it contains spaces).

Required options:

- `-p, --prompt`: Path to the prompt template file (created with generate-init).

Common options:

- `-c, --count`: Number of card examples to generate (default: 3).
- `-m, --model`: AI model to use (defaults to `gpt-5-mini` or `gemini-2.5-flash` depending on your API key; can also be set via `config set model`).
- `-d, --dry-run`: Display generated cards without starting the interactive selection or import process.
- `-r, --retries`: Number of retries for failed requests (default: 3).
- `-t, --temperature`: LLM temperature, a value between 0 and 2 that controls creativity (default: 1.0).
- `--max-tokens`: Set a maximum number of tokens for the LLM response.
- `-o, --output`: Export cards to a file instead of importing to Anki (e.g., `cards.yaml`, `cards.csv`).
- `--log`: Enable logging of LLM responses to a file (useful for debugging).
- `--copy`: Copy the LLM prompt to the clipboard and wait for a manually pasted response. Useful when you don't have API access and want to use a browser LLM interface like ChatGPT.

### Prompt file format

The `--prompt` file is a text or markdown file with two parts: YAML frontmatter for configuration and a prompt body with instructions for the LLM.

**Frontmatter (required).** The frontmatter is a YAML block at the top of the file enclosed by `---`:

- `deck`: The target Anki deck name.
- `noteType`: The name of the Anki note type (model) to use.
- `fieldMap`: Maps the keys of the LLM's JSON output to your actual Anki field names. The LLM will be instructed to generate JSON with the keys on the left, and anki-llm will use them to populate the Anki fields on the right.

**Prompt body.** The body contains your instructions for the LLM. It must:

- Include the `{term}` placeholder, which will be replaced by the term you provide on the command line.
- Include the `{count}` placeholder, which will be replaced by the number of cards requested.
- Instruct the LLM to return a JSON array of objects, where each object represents one card and uses the keys defined in `fieldMap`.
- Include a "one-shot" example showing the exact JSON array structure and desired formatting (e.g., HTML for bolding or lists).
- Encourage the LLM to generate diverse cards that highlight different nuances, contexts, or usage examples of the term.

Example prompt file: `japanese-vocab-prompt.md`

### Copy mode

The `--copy` flag allows you to generate cards without API keys by manually copying prompts to a browser-based LLM interface (ChatGPT, Claude, Gemini, etc.) and pasting responses back.

1. Run the command with `--copy`: `anki-llm generate "今日" -p prompt.md --copy`
2. The program automatically copies the LLM prompt to your clipboard.
3. Paste the prompt into your preferred LLM interface (ChatGPT, Claude, etc.).
4. Copy the complete JSON response from the LLM.
5. Paste it into the terminal, type END on a new line, and press Enter to submit.
6. The program validates and processes your cards normally.

Benefits:

- No API key required
- Use any LLM interface you prefer
- Works with free-tier LLM services
- Full control over the LLM interaction

Key features:

- ✅ Interactive selection: review and choose which generated cards to keep.
- ✅ Duplicate detection: automatically flags cards that may already exist in your deck.
- ✅ Export option: save generated cards to YAML/CSV for review before importing.
- ✅ Highly customizable: full control over card generation via the prompt file.

### query

Queries the AnkiConnect API directly with any supported action.
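For instance, a sketch invocation (`notesInfo` is a standard AnkiConnect action; the note ID is a placeholder):

```shell
# Fetch the full field data for a single note
anki-llm query notesInfo '{"notes":[1511234567890]}'
```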
This command is especially useful for AI agents (like Claude Code) to explore and interact with your Anki collection programmatically.

- `action`: The AnkiConnect API action to perform (e.g., `deckNames`, `findNotes`, `cardsInfo`).
- `[params]`: Optional JSON string of parameters for the action.

Why this is useful for AI agents: AI assistants can use this command to dynamically query your Anki collection without you having to provide information manually. For example:

- "List all my decks" → `anki-llm query deckNames`
- "Show me statistics for my Japanese deck" → `anki-llm query getDeckStats '{"decks":["Japanese"]}'`
- "Find all cards with tag 'vocabulary'" → `anki-llm query findNotes '{"query":"tag:vocabulary"}'`

The command outputs clean JSON that AI agents can parse and reason about, making it easy to build custom workflows or answer questions about your Anki collection.

Example: sampling random cards from decks. AI agents can use `anki-llm query` to discover information about your collection and then take action. Claude Code, for instance, can use the query command to sample random cards from multiple decks given the instruction: "Use anki-llm to pick random cards from Glossika decks, and print the English and Japanese fields for each, pick 10 cards from each deck, and save to a markdown file". This demonstrates how the query command enables AI agents to autonomously build custom scripts for data analysis and extraction tasks.

Special actions: `docs` or `help` returns the complete AnkiConnect API documentation. This is especially useful for AI agents that need to understand which actions are available and how to use them: an agent can query it once and then use that context to make informed decisions about which API calls to make. See ANKI_CONNECT.md for the complete list of available actions and their parameters.

## Example: bulk-verifying translations

Let's say you have an Anki deck named "Japanese Core 1k" with 1000 notes.
Each note has a Japanese field with a sentence and a Translation field with an English translation that you suspect is inaccurate. We'll use anki-llm to generate better translations for all 1000 notes.

**Step 1: Export the deck.** First, export the notes from your Anki deck into a YAML file. YAML is great for multiline text fields and for using `git diff` to see what has changed after processing is complete. The export command connects to Anki, finds all notes in the deck, and saves them to `notes.yaml`, with the noteId and fields of each note.

**Step 2: Write a prompt template.** Next, create a prompt file (`prompt-ja-en.txt`) to instruct the AI. Use `{field_name}` syntax for variables that will be replaced with data from each note; here we want to process the Japanese field.

**Step 3: Process the file.** Now, run the process-file command. We'll tell it to use `notes.yaml` as input, write to a new `notes-translated.yaml` file, process the Translation field, and use our prompt template. The tool reads the Japanese field from each note to fill the prompt, and the AI's response overwrites the Translation field.

- `notes.yaml`: The input file.
- `--output notes-translated.yaml`: The output file.
- `--field Translation`: The field the AI's result is placed into.
- `--prompt prompt-ja-en.txt`: Our instruction template.
- `--model gemini-2.5-flash`: The AI model to use.
- `--batch-size 10`: Process 10 notes concurrently for speed.
- `--require-result-tag`: Ensures the tool only saves the content inside the result tag, ignoring the AI's analysis.

You will see real-time progress as the notes are processed.

**Step 4: Import the results.** The final step is to import the newly generated translations back into Anki. The tool uses the noteId to find and update the existing notes.

- `notes-translated.yaml`: The file with our improved translations.
- `--deck "Japanese Core 1k"`: The destination deck.

The note type will be inferred automatically from the existing notes in the deck; you can also specify it explicitly with `--note-type "Japanese Model"` if needed.

That's it!
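Assembled from the steps above, the full run is three commands (file and deck names as in this walkthrough; the prompt file's contents are up to you):

```shell
# Step 1: export the deck to YAML
anki-llm export "Japanese Core 1k" notes.yaml

# Step 3: regenerate the Translation field with the LLM
anki-llm process-file notes.yaml \
  --output notes-translated.yaml \
  --field Translation \
  --prompt prompt-ja-en.txt \
  --model gemini-2.5-flash \
  --batch-size 10 \
  --require-result-tag

# Step 4: import the improved translations back into Anki
anki-llm import notes-translated.yaml --deck "Japanese Core 1k"
```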
All 1000 notes in your Anki deck have now been updated with high-quality translations.

## Example: adding a Key Vocabulary field

Sentence flashcards often benefit from a focused vocabulary breakdown. You can use anki-llm to populate a dedicated Key Vocabulary field with structured HTML that spotlights the most important words in each sentence.

Create a prompt file (`prompt-key-vocab.txt`) that instructs the model to reason about the sentence, pick the top 1–3 items, and return clean HTML. This example assumes your notes have Japanese and English fields. You can start from the full prompt example in examples/key_vocabulary.md.

Process your exported notes and overwrite the Key Vocabulary field with the HTML generated by the prompt:

- `--field "Key Vocabulary"`: Updates that specific field on each note.
- `--require-result-tag`: Keeps only the HTML between the result tags and drops the analysis from the prompt.

When you open the processed YAML/CSV, the generated field contains the structured HTML breakdown. After verifying the results, import the updated file back into Anki to add the vocabulary explanations to your cards.

## Example: generating new cards

Let's create several new example flashcards for the Japanese word 会議 (meeting) and add them to our "Japanese::Vocabulary" deck.

First, run the generate-init wizard. It will ask you to select your deck and note type, then use an LLM to analyze your existing cards and generate a prompt file tailored to your collection. Follow the interactive prompts. The wizard uses an AI model (defaulting to gpt-5 or gemini-2.5-flash depending on your API key) to analyze existing cards in your deck and create a smart prompt that matches their style and formatting. When it's done, it saves a new file, for example `japanese-vocabulary-prompt.md`, containing the YAML frontmatter (deck, note type, field map) and the tailored instructions. You can edit this file to further refine the instructions for the AI.

Now, use the generate command with your new prompt file to create card examples for 会議.
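The invocation might look like this (the prompt filename comes from the wizard step; `-c` is optional and defaults to 3):

```shell
anki-llm generate "会議" -p japanese-vocabulary-prompt.md -c 3
```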
The tool makes a single API call asking the LLM to generate 3 distinct cards and shows its progress. After generation, an interactive checklist appears in your terminal; use the arrow keys and spacebar to select the cards you want to add to Anki. In this example there are three options, and card 3 has been flagged as a potential duplicate because a similar card already exists in the deck. Select the first two cards and press Enter, and they are immediately added to your Anki deck.

That's it! You have generated, reviewed, and imported multiple high-quality, contextual Anki cards from a single command.

## Development

- Use tsx to run the CLI directly from TypeScript source without rebuilding.
- Use `pnpm link` to test the command globally. Note: the linked command uses the compiled JavaScript from `dist/`, so run `pnpm run build` after making changes to see them reflected. To unlink: `pnpm unlink --global`.
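A sketch of the development loop (the entry-point path is an assumption; adjust it to the repo layout):

```shell
# Run straight from TypeScript source without rebuilding (path assumed)
pnpm tsx src/cli.ts --help

# Test the command globally via the compiled build
pnpm run build
pnpm link --global
anki-llm --help

# Clean up when done
pnpm unlink --global
```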