SPEAK YOUR MIND
SHIP YOUR CODE

$_

Natural language to production code in seconds. Latest 2025 AI models. Zero configuration.

NO.API.KEYS.REQUIRED
20+.AI.MODELS.ACTIVE
ZERO.CONFIGURATION

COMMAND MATRIX

Execute powerful operations with simple commands

$ /code create REST API
Generate complete REST API with authentication
OUTPUT: Creates routes, models, middleware, validation, and tests

$ /image generate logo design
AI-powered image generation and editing
OUTPUT: Professional graphics, logos, UI mockups with latest AI models

$ /video create demo animation
Generate and edit video content
OUTPUT: Product demos, animations, presentations with AI video tools

$ /l2r explain machine learning
Interactive learning and knowledge acquisition
OUTPUT: Structured tutorials, examples, and hands-on practice

$ /code fix errors
Auto-fix TypeScript and linting errors
OUTPUT: Analyzes errors and applies intelligent fixes

$ /deploy to production
One-command deployment pipeline
OUTPUT: CI/CD, Docker, Kubernetes, cloud platforms

MARIA.TERMINAL.v3.6.0
$ maria /code create authentication system
[ANALYZING] Project structure detected: Node.js + Express
[PLANNING] Generating authentication architecture...
[CREATING] JWT token service...
[CREATING] User model with bcrypt hashing...
[CREATING] Auth middleware with role-based access...
[CREATING] OAuth2.0 integration endpoints...
[TESTING] Running security audit...
✓ COMPLETE: 8 files created, 342 lines of code generated
✓ SECURITY: All OWASP Top 10 checks passed
✓ READY: Authentication system deployed

SYSTEM CAPABILITIES

Advanced features for the modern developer

CODE.GENERATION [10K+ templates]
Natural language to production code in seconds

IMAGE.GENERATION [DALL-E, Midjourney]
AI-powered graphics, logos, and visual content

VIDEO.CREATION [AI Video Models]
Generate demos, animations, and presentations

LEARNING.ENGINE [/l2r command]
Interactive tutorials and knowledge acquisition

AI.MODELS [20+ models]
GPT-5, Claude Opus 4.1, Gemini 2.5, Grok 4

SECURITY.MATRIX [SOC2/HIPAA]
Military-grade encryption and compliance

AI NEURAL NETWORK

Latest August 2025 AI Models - GPT-5, Claude Opus 4.1, Gemini 2.5 Pro, Grok 4

GPT-5              [ONLINE]   Latency: 45ms
GPT-5-MINI         [ONLINE]   Latency: 28ms
CLAUDE-OPUS-4.1    [ONLINE]   Latency: 52ms
CLAUDE-SONNET-4    [ONLINE]   Latency: 38ms
GEMINI-2.5-PRO     [ONLINE]   Latency: 41ms
GEMINI-2.5-FLASH   [ONLINE]   Latency: 25ms
GEMINI-2.5-IMAGE   [ONLINE]   Latency: 47ms
GROK-4             [ONLINE]   Latency: 89ms

INSTALLATION.PROTOCOL

Cross-platform support for Windows, macOS, and Linux

QUICK.START

Global Installation
$ npm install -g @bonginkan/maria
Start MARIA
$ maria
Check Version
$ maria --version

ADVANCED.OPERATIONS

Update to Latest
$ npm update -g @bonginkan/maria
Force Reinstall
$ npm install -g @bonginkan/maria --force
Uninstall
$ npm uninstall -g @bonginkan/maria

PLATFORM.SUPPORT

🪟 Windows   ✓ FULLY SUPPORTED
🍎 macOS     ✓ FULLY SUPPORTED
🐧 Linux     ✓ FULLY SUPPORTED
Node.js Requirement: ≥ 20.10.0
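
Before installing, it can help to confirm that the local runtime meets the Node.js requirement above; a quick check from any terminal (upgrade via your package manager or a version manager such as nvm if it falls short):

# Verify Node.js is v20.10.0 or later before installing MARIA
node --version
# Confirm npm is available for the global install
npm --version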

TROUBLESHOOTING

EACCES Permission Error (common with Homebrew)

Reset ownership of the Homebrew-managed package directory:

sudo chown -R $(whoami):admin /opt/homebrew/lib/node_modules/@bonginkan/maria
Then run: npm install -g @bonginkan/maria@latest

Alternative: Use sudo (Quick Fix)

sudo npm install -g @bonginkan/maria

Fix npm permissions (recommended):

npm config set prefix ~/.npm-global

Add to PATH (Linux/macOS):

export PATH=~/.npm-global/bin:$PATH
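
Putting the recommended fix together, a minimal sketch of the full sequence might look like this (assumes a bash shell and the ~/.npm-global prefix suggested above; zsh users would append to ~/.zshrc instead):

# Point npm's global prefix at a user-writable directory (no sudo needed afterwards)
npm config set prefix ~/.npm-global

# Make the new bin directory visible to future shells, then reload the current one
echo 'export PATH=~/.npm-global/bin:$PATH' >> ~/.bashrc
source ~/.bashrc

# Reinstall MARIA into the new prefix and confirm it resolves
npm install -g @bonginkan/maria@latest
maria --version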

✅ After Installation

🏠 From Home Directory:

cd ~/ && maria /help

📁 From Any Project:

cd /path/to/project && maria /code create component

💻 VS Code Terminal:

maria --version

🔄 Check Alias:

which maria

⚠️ If you see "aliased", you might be using a local development version instead of the global install
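
If the check above does report an alias, one way to fall back to the global install for the current shell session (bash/zsh, assuming the global package is already installed) is:

# Show how the shell resolves the command (alias, function, or file)
type maria

# Remove any alias for this session and clear the shell's command cache
unalias maria 2>/dev/null
hash -r

# Should now print a path inside the global npm bin directory
which maria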

Need help? Check the documentation or join our GitHub community

LOCAL LLM INFRASTRUCTURE

Run powerful models on your own hardware - No cloud dependency

OLLAMA

  • Llama 3.2 (3B, 70B)
  • Mistral (7B, 8x7B)
  • DeepSeek Coder
  • Phi-3
  • Auto model management

LM STUDIO

  • GPT-OSS-120B
  • GPT-OSS-20B
  • GUI model management
  • GPU acceleration
  • OpenAI-compatible API

vLLM

  • High-throughput serving
  • Tensor parallelism
  • Continuous batching
  • PagedAttention
  • Production-ready

Full compatibility with local inference servers • Zero-config integration • Automatic fallback
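
As a quick sanity check before pointing MARIA at a local server, you can query the endpoints these tools typically expose; the ports below are their common defaults and may differ on your setup:

# LM Studio and vLLM both serve an OpenAI-compatible API; list the loaded models
curl http://localhost:1234/v1/models     # LM Studio default port
curl http://localhost:8000/v1/models     # vLLM default port

# Ollama uses its own HTTP API; list locally pulled models
curl http://localhost:11434/api/tags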

PRICING MATRIX

Choose your development acceleration tier

🎉 FREE TIER CAMPAIGN ACTIVE

Currently, only the Free plan is available for immediate activation with Google login.
Paid plans (Starter/Pro/Ultra) are accepting waitlist registrations.

Free
Perfect for getting started
Price: Free
300 requests/month
Available models: 3
/code commands: 40/month
File analysis: 5/month
Community support

Starter (Popular, Coming Soon)
For individual developers
Price: $20/month
1,400 requests/month
Available models: 4
/code commands: 300/month
File analysis: 50/month
Community support

Pro (Coming Soon)
For serious developers
Price: $39/month
5,000 requests/month
Available models: 6 + Local LLM
/code commands: 1,200/month
File analysis: 200/month
Community support

Ultra (Coming Soon)
For power users and teams
Price: $99/month
10,000 requests/month
Available models: 12 + Local LLM
/code commands: 5,000/month
File analysis: 500/month
Priority support
5-user license

All plans include web dashboard, API access, and usage analytics

CODE AT THE SPEED OF THOUGHT

Transform ideas into production-ready code instantly.

NO.CREDIT.CARD // INSTANT.ACCESS // FREE.TIER.AVAILABLE