Natively is a free, privacy-first AI Copilot for Google Meet, Zoom, and Teams. It serves as an open-source alternative to Cluely, providing real-time transcription, interview assistance, and automated meeting notes, all completely locally.
Unlike cloud-only tools, Natively uses Local RAG (Retrieval-Augmented Generation) to remember past conversations, giving you instant answers during technical interviews, sales calls, and daily standups.
While other tools focus on being "lightweight" wrappers, Natively is a complete intelligence system.
- Local Vector Database (RAG): We embed your meetings locally so you can ask, "What did John say about the API last week?"
- Rich Dashboard: A full UI to manage, search, and export your history—not just a floating window.
- Rolling Context: We don't just transcribe; we maintain a "memory window" of the conversation for smarter answers.
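The rolling "memory window" above can be sketched as a simple bounded buffer. This is a toy illustration only; the class name, API, and character budget are invented for the example and are not Natively's actual implementation.

```typescript
// Sketch of a rolling "memory window": keep only the most recent
// transcript lines within a character budget so prompts stay small.
// Class and method names are illustrative, not Natively's real API.
class RollingContext {
  private lines: string[] = [];

  constructor(private maxChars = 4000) {}

  push(line: string): void {
    this.lines.push(line);
    let total = this.lines.join("\n").length;
    // Evict oldest lines until we fit the budget (always keep the newest).
    while (total > this.maxChars && this.lines.length > 1) {
      total -= this.lines.shift()!.length + 1; // +1 for the dropped "\n"
    }
  }

  prompt(): string {
    return this.lines.join("\n");
  }
}

// Usage: with a tiny 20-character budget, only the newest line survives.
const ctx = new RollingContext(20);
ctx.push("Alice: intro");
ctx.push("Bob: status update");
console.log(ctx.prompt()); // "Bob: status update"
```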
This demo shows a complete live meeting scenario:
- Real-time transcription as the meeting happens
- Rolling context awareness across multiple speakers
- Screenshot analysis of shared slides
- Instant generation of what to say next
- Follow-up questions and concise responses
- All happening live, without recording or post-processing
Note

macOS Users:

- "Unidentified Developer": If you see this, right-click the app > Select Open > Click Open.
- "App is Damaged": If you see this, run the command in Terminal based on your download:

  For `.zip` downloads:

  ```bash
  xattr -cr /Applications/Natively.app
  ```

  For `.dmg` downloads:

  1. Open Terminal and run: `xattr -cr ~/Downloads/Natively-1.1.7-arm64.dmg`
  2. Install Natively from the `.dmg`
  3. Open Terminal and run: `xattr -cr /Applications/Natively.app`
- Premium Profile Intelligence: Context awareness with Job Description (JD) and Resume integration, advanced company research, and real-time negotiation assistance.
- Live Meeting RAG: Instant, intelligent retrieval of context directly during live meetings using local vector embeddings.
- Soniox Speech Provider: First-class support for ultra-fast, high-accuracy streaming Speech-to-Text with Soniox, joining Google, Groq, OpenAI, Deepgram, ElevenLabs, Azure, and IBM Watson.
- Multilingual & Accent Support: Full control over AI response language and highly specific speech recognition for various accents and dialects.
- Stability & Fixes: Resolved numerous issues and merged 3 community pull requests for enhanced performance.
- Why Natively?
- Privacy & Security
- Quick Start (End Users)
- Installation (Developers)
- AI Providers
- Key Features
- Meeting Intelligence Dashboard
- Use Cases
- Comparison
- FAQ
- Architecture Overview
- Technical Details
- Known Limitations
- Responsible Use
- Contributing
- License
Natively is a desktop AI assistant for live situations:
- Meetings
- Interviews
- Presentations
- Classes
- Professional conversations
It provides:
- Live answers
- Rolling conversational context
- Screenshot and document understanding
- Real-time speech-to-text
- Instant suggestions for what to say next
All while remaining invisible, fast, and privacy-first.
- 100% open source (AGPL-3.0)
- Bring Your Own Keys (BYOK)
- Local AI option (Ollama)
- All data stored locally
- No telemetry
- No tracking
- No hidden uploads
You explicitly control:
- What runs locally
- What uses cloud AI
- Which providers are enabled
- Node.js (v20+ recommended)
- Git
- Rust (required for native audio capture)
Natively is 100% free to use with your own keys.
Connect any speech provider and any LLM. No subscriptions, no markups, no hidden fees. All keys are stored locally.
- Soniox (API Key)
- Google Cloud Speech-to-Text (Service Account)
- Groq (API Key)
- OpenAI Whisper (API Key)
- Deepgram (API Key)
- ElevenLabs (API Key)
- Azure Speech Services (API Key + Region)
- IBM Watson (API Key + Region)
Connect Natively to any leading model or local inference engine.
| Provider | Best For |
|---|---|
| Gemini 3 Pro/Flash | Recommended: Massive context window (2M tokens) & low cost. |
| OpenAI (GPT-5.2) | High reasoning capabilities. |
| Anthropic (Claude 4.5) | Coding & complex nuanced tasks. |
| Groq / Llama 3 | Insane speed (near-instant answers). |
| Ollama / LocalAI | 100% Offline & Private (No API keys needed). |
| OpenAI-Compatible | Connect to any custom endpoint (vLLM, LM Studio, etc.). |
Note: You only need ONE speech provider to get started. We recommend Google STT, Groq, or Deepgram for the fastest real-time performance.
Your credentials:
- Never leave your machine
- Are not logged, proxied, or stored remotely
- Are used only locally by the app
What You Need:
- Google Cloud account
- Billing enabled
- Speech-to-Text API enabled
- Service Account JSON key
Setup Summary:
- Create or select a Google Cloud project
- Enable Speech-to-Text API
- Create a Service Account
- Assign role: `roles/speech.client`
- Generate and download a JSON key
- Point Natively to the JSON file in settings
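If you prefer the command line, the console steps above can be scripted with the gcloud CLI. This is a sketch of a setup/configuration sequence that requires an authenticated gcloud session and billing-enabled project; `my-project`, the service-account name `natively-stt`, and the key path are placeholders, not values from the Natively docs.

```shell
# Enable the Speech-to-Text API on your project (placeholder project ID).
gcloud services enable speech.googleapis.com --project=my-project

# Create a dedicated service account for Natively (name is illustrative).
gcloud iam service-accounts create natively-stt --project=my-project

# Grant it the Speech client role referenced in the setup summary.
gcloud projects add-iam-policy-binding my-project \
  --member="serviceAccount:natively-stt@my-project.iam.gserviceaccount.com" \
  --role="roles/speech.client"

# Generate a JSON key; point Natively to this file in settings.
gcloud iam service-accounts keys create ~/natively-stt.json \
  --iam-account="natively-stt@my-project.iam.gserviceaccount.com"
```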
```bash
git clone https://github.com/evinjohnn/natively-cluely-ai-assistant.git
cd natively-cluely-ai-assistant
npm install
```

Create a `.env` file:

```bash
# Cloud AI
GEMINI_API_KEY=your_key
GROQ_API_KEY=your_key
OPENAI_API_KEY=your_key
CLAUDE_API_KEY=your_key
GOOGLE_APPLICATION_CREDENTIALS=/absolute/path/to/service-account.json

# Speech Providers (Optional - only one needed)
DEEPGRAM_API_KEY=your_key
ELEVENLABS_API_KEY=your_key
AZURE_SPEECH_KEY=your_key
AZURE_SPEECH_REGION=eastus
IBM_WATSON_API_KEY=your_key
IBM_WATSON_REGION=us-south

# Local AI (Ollama)
USE_OLLAMA=true
OLLAMA_MODEL=llama3.2
OLLAMA_URL=http://localhost:11434

# Default Model Configuration
DEFAULT_MODEL=gemini-3-flash-preview
```

Run in development:

```bash
npm start
```

Build distributables:

```bash
npm run dist
```

- Custom (BYO Endpoint): Paste any cURL command to use OpenRouter, DeepSeek, or private endpoints.
- Ollama (Local): Zero-setup detection of local models (Llama 3, Mistral, Gemma).
- Google Gemini: First-class support for Gemini 3.0 Pro/Flash.
- OpenAI: GPT-5.2 support with optimized system prompts.
- Anthropic: Claude 4.5 Sonnet support for complex reasoning.
- Groq: Ultra-fast inference with Llama 3 models.
- Always-on-top translucent overlay
- Instantly hide/show with shortcuts
- Works across all applications
- Real-time speech-to-text
- Context-aware Memory (RAG) for Past Meetings
- Instant answers as questions are asked
- Smart recap and summaries
- Capture any screen content
- Analyze slides, documents, code, or problems
- Immediate explanations and solutions
- Job Description & Resume Context: Natively understands your background and the role you're applying for to provide highly tailored, context-aware answers.
- Company Research: Get instant intelligence and dossiers on the company you are interviewing with.
- Negotiation Assistance: Real-time guidance and strategy during offer and salary negotiations.
- What should I answer?
- Shorten response
- Recap conversation
- Suggest follow-up questions
- Manual or voice-triggered prompts
Natively understands that listening to a meeting and talking to an AI are different tasks. We treat them separately:
- System Audio (The Meeting): Captures high-fidelity audio directly from your OS (Zoom, Teams, Meet). It "hears" what your colleagues are saying without interference from your room noise.
- Microphone Input (Your Voice): A dedicated channel for your voice commands and dictation. Toggle it instantly to ask Natively a private question without muting your meeting software.
- Global activation shortcut
- Instant answer overlay
- Upcoming meeting readiness
- Full Offline RAG: All vector embeddings and retrieval happen locally (SQLite).
- Live Meeting RAG: Instant, intelligent retrieval of context directly during live meetings using local vector embeddings.
- Semantic Search: Innovative "Smart Scope" detects if you are asking about the current meeting or a past one.
- Global Knowledge: Ask questions across all your past meetings ("What did we decide about the API last month?").
- Automatic Indexing: Meetings are automatically chunked, embedded, and indexed in the background.
- Ultra-fast STT: First-class support for streaming Speech-to-Text with Soniox.
- Accent & Dialect Control: Set speech recognition specific to accents, dialects, and varied AI response languages.
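The chunk → embed → index → retrieve loop described above can be sketched as follows. This is a toy in-memory version: the bag-of-words "embedding" and hardcoded chunks are stand-ins invented for the example, while Natively's actual pipeline uses real vector embeddings stored in SQLite.

```typescript
// Toy sketch of local RAG retrieval: embed chunks, then rank them by
// cosine similarity against the query. Not Natively's real code.
type Vec = Map<string, number>;
type Chunk = { text: string; vector: Vec };

// Stand-in "embedding": word-count vector (real systems use neural embeddings).
function embed(text: string): Vec {
  const v: Vec = new Map();
  for (const w of text.toLowerCase().match(/[a-z]+/g) ?? []) {
    v.set(w, (v.get(w) ?? 0) + 1);
  }
  return v;
}

function cosine(a: Vec, b: Vec): number {
  let dot = 0, na = 0, nb = 0;
  a.forEach((x, w) => { na += x * x; dot += x * (b.get(w) ?? 0); });
  b.forEach((x) => { nb += x * x; });
  return dot / (Math.sqrt(na * nb) || 1);
}

function retrieve(query: string, store: Chunk[], topK = 3): string[] {
  const q = embed(query);
  return [...store]
    .sort((a, b) => cosine(q, b.vector) - cosine(q, a.vector))
    .slice(0, topK)
    .map((c) => c.text);
}

// Index two past-meeting chunks, then ask a question about one of them.
const meetingIndex: Chunk[] = [
  "John said the API migration ships next week.",
  "Standup: frontend is blocked on the design review.",
].map((text) => ({ text, vector: embed(text) }));

console.log(retrieve("What did John say about the API?", meetingIndex, 1));
```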
Seamlessly routes queries between ultra-fast models for instant speed and reasoning models (Gemini, OpenAI, Claude) for complex tasks.
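As an illustration of that routing idea, here is a minimal heuristic sketch. The keyword list, length threshold, and function name are invented for the example; Natively's actual routing logic is not shown in this README.

```typescript
// Illustrative smart-routing heuristic: send short, time-critical prompts
// to a fast model, and screenshots or complex asks to a reasoning model.
// Thresholds and keywords are made up for this sketch.
type Route = "fast" | "reasoning";

function routeQuery(prompt: string, hasScreenshot: boolean): Route {
  const looksComplex =
    hasScreenshot ||
    prompt.length > 400 ||
    /\b(design|architecture|prove|debug|refactor)\b/i.test(prompt);
  return looksComplex ? "reasoning" : "fast";
}

console.log(routeQuery("What should I say next?", false)); // "fast"
```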
- Undetectable Mode: Instantly hide from dock/taskbar.
- Masquerading: Disguise process names and window titles as harmless system utilities.
- Local-Only Processing: All data stays on your machine.
Natively includes a powerful, local-first meeting management system to review, search, and manage your entire conversation history.
- Meeting Archives: Access full transcripts of every past meeting, searchable by keywords or dates.
- Smart Export: One-click export of transcripts and AI summaries to Markdown, JSON, or Text—perfect for pasting into Notion, Obsidian, or Slack.
- Usage Statistics: Track your token usage and API costs in real time. Know exactly how much you are spending on Gemini, OpenAI, or Claude.
- Audio Separation: Distinct controls for System Audio (what they say) vs. Microphone (what you dictate).
- Session Management: Rename, organize, or delete past sessions to keep your workspace clean.
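To make the "Smart Export" idea concrete, here is a sketch of serializing a session to Markdown. The `Session` shape below is an assumption made for the example, not Natively's real schema.

```typescript
// Sketch: export one meeting session to Markdown for Notion/Obsidian/Slack.
// The Session type is a guess at a minimal schema, for illustration only.
type Session = {
  title: string;
  date: string;
  summary: string;
  transcript: string[];
};

function toMarkdown(s: Session): string {
  return [
    `# ${s.title} (${s.date})`,
    "",
    "## Summary",
    s.summary,
    "",
    "## Transcript",
    ...s.transcript.map((line) => `- ${line}`),
  ].join("\n");
}
```

JSON export would be a one-liner (`JSON.stringify(session, null, 2)`), which is why dashboards typically offer both formats from the same underlying record.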
- Live Assistance: Get explanations for complex lecture topics in real-time.
- Translation: Instant language translation during international classes.
- Problem Solving: Immediate help with coding or mathematical problems.
- Interview Support: Context-aware prompts to help you navigate technical questions.
- Sales & Client Calls: Real-time clarification of technical specs or previous discussion points.
- Meeting Summaries: Automatically extract action items and core decisions.
- Code Insight: Explain unfamiliar blocks of code or logic on your screen.
- Debugging: Context-aware assistance for resolving logs or terminal errors.
- Architecture: Guidance on system design and integration patterns.
Natively is built on a simple promise: any speech provider, any API key, 100% free to use, and universally compatible.
| Feature | Natively | Commercial Tools (Cluely, etc.) | Other OSS |
|---|---|---|---|
| Price | Free (BYOK) | $20 - $50 / month | Free |
| Speech Providers | Any (Google, Groq, Deepgram, etc.) | Locked to Vendor | Limited |
| LLM Choice | Any (Local or Cloud) | Locked to Vendor | Limited |
| Privacy | Local-First & Private | Data stored on servers | Depends |
| Latency | Real-Time (<500ms) | Variable | Often Slow |
| Universal Mode | Works over ANY app | Often limited to browser | No |
| Meeting History | Full Dashboard & Search | Limited | None |
| Data Export | JSON / Markdown / Text | Proprietary Format | None |
| Audio Channels | Dual (System + Mic) | Single Stream | Single Stream |
| Screenshot Analysis | Yes (Native) | Limited | Rare |
| Stealth Mode | Yes (Undetectable) | No | No |
Natively processes audio, screen context, and user input locally, maintains a rolling context window, and sends only the required prompt data to the selected AI provider (local or cloud).
No raw audio, screenshots, or transcripts are stored or transmitted unless explicitly enabled by the user.
- React, Vite, TypeScript, TailwindCSS
- Electron
- Rust (native audio)
- SQLite (local storage)
- Gemini 3 (Flash / Pro)
- OpenAI (GPT-5.2)
- Claude (Sonnet 4.5)
- Ollama (Llama, Mistral, CodeLlama)
- Groq (Llama, Mixtral)
- Minimum: 4GB RAM
- Recommended: 8GB+ RAM
- Optimal: 16GB+ RAM for local AI
Natively is intended for:
- Learning
- Productivity
- Accessibility
- Professional assistance
Users are responsible for complying with:
- Workplace policies
- Academic rules
- Local laws and regulations
This project does not encourage misuse or deception.
- Linux support is limited; maintainers are welcome
Contributions are welcome:
- Bug fixes
- Feature improvements
- Documentation
- UI/UX enhancements
- New AI integrations
Quality pull requests will be reviewed and merged.
Licensed under the GNU Affero General Public License v3.0 (AGPL-3.0).
If you run or modify this software over a network, you must provide the full source code under the same license.
This repository contains the open-source core of the project.
Some features available in official releases are part of the commercial Premium Edition and are not included in this repository.
Note: This project is available for sponsorships, ads, or partnerships – perfect for companies in the AI, productivity, or developer tools space.
Star this repo if Natively helps you succeed in meetings, interviews, or presentations!
Yes. Natively is an open-source project. You only pay for what you use by bringing your own API keys (Gemini, OpenAI, Anthropic, etc.), or use it 100% free by connecting to a local Ollama instance.
Yes. Natively uses a Rust-based system audio capture that works universally across any desktop application, including Zoom, Microsoft Teams, Google Meet, Slack, and Discord.
Natively is built on Privacy-by-Design. All transcripts, vector embeddings (Local RAG), and keys are stored locally on your machine. We have no backend and collect zero telemetry.
Natively is a powerful assistant for any professional situation. However, users are responsible for complying with their company policies and interview guidelines.
Simply install Ollama, run a model (e.g. `ollama run llama3`), and Natively will automatically detect it. Enable "Ollama" in the AI Providers settings to switch to offline mode.
ai-assistant meeting-notes interview-helper presentation-support ollama gemini-ai electron-app cross-platform privacy-focused open-source local-ai screenshot-analysis academic-helper sales-assistant coding-companion cluely cluely alternative interview coder final round ai claude skills moltbot

