⚡ 10,000+ Interactive Sessions

See It In Action: No Signup Required

Experience the future of interaction. Each demo below is fully interactive: click any card to explore real-world applications powered by eye tracking technology.

8 Live Demos · 0s Setup Time
⚕️ Healthcare

Critical Alert System

Life-saving speed: Medical professionals respond to patient alerts instantly with gaze control. No hands needed: just look and act. See how hospitals are revolutionizing patient care.

View Demo
📧 Productivity

Professional Email

Never miss a message: Manage your inbox hands-free while multitasking. Perfect for busy professionals who need to stay connected without breaking focus.

View Demo
📱 Social Media

Video Player Experience

Next-level multitasking: Watch videos while managing messages, all with your eyes. Experience the future of social media interaction where your gaze is the remote.

View Demo
💰 Finance

Transaction Notifications

Bank-level security: Monitor financial transactions instantly with secure gaze-controlled alerts. Trusted by banking professionals for real-time account monitoring.

View Demo
🎯 Multi-Use Case

Interactive Showcase

See it all in action: One demo, infinite possibilities. Explore healthcare, retail, gaming, and more. Discover why industry leaders choose eye tracking.

View Demo
💬 Communication

Video with Messages

True multitasking: Watch videos and respond to messages simultaneously, without pausing or looking away. Your eyes control everything seamlessly.

View Demo
🧭 Navigation

Advanced Navigation

Navigate at the speed of thought: Switch between dashboards, tabs, and controls instantly with precision eye tracking. Built for power users who demand efficiency.

View Demo

Pricing

Simple, transparent pricing. One-time purchase for builders. Enterprise licensing for studios and commercial use.

Enterprise License (Coming Soon)
$30,000/year

For AAA studios, entertainment companies, and commercial use

  • Everything in Desktop License
  • Full commercial use rights
  • Licensed trained model
  • Integration support and consulting
  • Priority 24/7 support
  • Custom model training
  • White-label options
  • 99.9% uptime SLA
  • Dedicated account manager
  • Volume discounts

Contact us for early access. We will notify you when it launches.

AI/LLM Training Data Collection

Your interaction data is used to train and improve our AI models

Data Collection and Usage

Storage Type

IndexedDB (via WebGazer) - Persistent browser storage for training data

Training Data

Unlimited: All clicks, calibration points, and interactions saved automatically

Learning Method

A regression model learns from the 468 MediaPipe Face Mesh landmarks, improving continuously as training data accumulates
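The landmark-to-gaze mapping described above can be sketched as a simple linear regression fit by gradient descent. This is an illustrative stand-in only: the function name `fitGazeAxis` and the training loop are assumptions, not CYNK MAAT's actual model, which may use ridge regression or a neural network over all 468 landmarks.

```javascript
// Minimal linear-regression sketch: map a landmark feature vector to one
// screen-coordinate axis. Illustrative only; the production model is not
// published here.
function fitGazeAxis(features, targets, { epochs = 500, lr = 0.05 } = {}) {
  const dim = features[0].length;
  const w = new Array(dim).fill(0); // one weight per landmark feature
  let b = 0;                        // bias term
  for (let e = 0; e < epochs; e++) {
    for (let i = 0; i < features.length; i++) {
      const pred = b + features[i].reduce((s, f, j) => s + f * w[j], 0);
      const err = pred - targets[i];
      for (let j = 0; j < dim; j++) w[j] -= lr * err * features[i][j];
      b -= lr * err;
    }
  }
  // Return a predictor for new feature vectors.
  return (x) => b + x.reduce((s, f, j) => s + f * w[j], 0);
}
```

In practice one such predictor would be fit per axis (x and y), with each calibration click contributing one (features, target) pair.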

Eye Tracking Calibration and Training

Area Calibration Process

  • Grid-Based Calibration: The screen is divided into grid sections, with options ranging from 5x5 to 50x50
  • Stabilization Anchors: Each grid cell has its own stabilization anchor point for maximum accuracy
  • Facial Landmark Detection: Comprehensive facial features (face, eyes, iris, pupil, and mouth) are collected for each section
  • Multiple Samples: Users can click the same cell multiple times; each click adds training data and improves accuracy

Anchor Preferences

Hard Anchor

Strong stabilization at the UI center. Training data concentrates on focused work areas.

Soft Anchor

Gentle stabilization at UI elements. Captures dynamic interface interactions.

Static Anchors

Multiple fixed anchors at specific locations. Provides consistent reference points for training.
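One way to read the hard/soft distinction above: each raw gaze estimate is pulled toward the nearest anchor, with a pull strength that depends on the anchor type. A minimal sketch, assuming invented pull weights (the actual stabilization math is not published here):

```javascript
// Pull a raw gaze point toward the nearest anchor. "hard" anchors pull
// strongly (stable, focused work areas); "soft" anchors pull gently
// (dynamic UI elements). Pull strengths are illustrative values.
const PULL = { hard: 0.8, soft: 0.3 };

function stabilize(gaze, anchors) {
  if (anchors.length === 0) return { ...gaze };
  let nearest = anchors[0];
  let best = Infinity;
  for (const a of anchors) {
    const d = Math.hypot(a.x - gaze.x, a.y - gaze.y);
    if (d < best) { best = d; nearest = a; }
  }
  const k = PULL[nearest.type] ?? 0;
  return {
    x: gaze.x + k * (nearest.x - gaze.x),
    y: gaze.y + k * (nearest.y - gaze.y),
  };
}
```

A static-anchor setup would simply pass a fixed list of anchors at known locations, giving the consistent reference points mentioned above.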

Data Persistence and Auto-Save

Auto-Save Frequency

Every 10 seconds and on every click, ensuring no training data is lost
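The save policy above can be sketched with the storage backend injected, so the same logic works in the browser (writing to IndexedDB) or in a test. `makeAutoSaver` and its shape are illustrative assumptions, not the actual implementation.

```javascript
// Auto-save policy sketch: flush pending training points every
// `intervalMs`, and additionally on every recorded click, so no click
// is lost between timer ticks. `save` is injected (in the browser it
// would write to IndexedDB).
function makeAutoSaver(save, intervalMs = 10_000) {
  const pending = [];
  const flush = () => {
    if (pending.length) save(pending.splice(0, pending.length));
  };
  const timer = setInterval(flush, intervalMs);
  return {
    record(point) {
      pending.push(point);
      flush(); // save immediately on every click
    },
    stop() {
      clearInterval(timer);
      flush(); // final flush on shutdown
    },
  };
}
```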

Data Persists

All training points carry over to every session with unlimited accumulation

Training Limits

Unlimited calibration points available for maximum model accuracy

Session Tracking

localStorage tracks session count and usage patterns for model training
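Session counting via localStorage can be sketched with the storage object injected, so the same function runs against `window.localStorage` in the browser or a stub elsewhere. The key name `cynk.sessionCount` is an assumption for illustration.

```javascript
// Increment and persist the session count. `storage` is any object with
// the localStorage getItem/setItem interface; the key name is illustrative.
function bumpSessionCount(storage, key = "cynk.sessionCount") {
  const count = Number(storage.getItem(key) ?? "0") + 1;
  storage.setItem(key, String(count));
  return count;
}
```

In the browser this would be called once at startup, e.g. `bumpSessionCount(window.localStorage)`.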

AI and LLM Training Purpose

All eye-tracking data, calibration points, facial landmarks, and interaction patterns collected during your use of CYNK MAAT are used to train and improve our artificial intelligence and large language model systems. This includes:

  • Gaze coordinates and patterns
  • Facial landmark data (468 MediaPipe points)
  • Calibration click positions
  • UI interaction behaviors
  • Dwell times and focus areas
  • Saccade patterns and eye movements
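Of the signals listed above, saccades are commonly separated from fixations by a velocity threshold over timestamped gaze samples. A minimal sketch, with an illustrative (uncalibrated) threshold:

```javascript
// Label each gaze-sample transition as "saccade" or "fixation" by speed.
// Samples are {x, y, t} with t in milliseconds; the px/ms threshold is an
// illustrative value, not a calibrated one.
function classifyMovements(samples, speedThreshold = 1.0) {
  const labels = [];
  for (let i = 1; i < samples.length; i++) {
    const a = samples[i - 1];
    const b = samples[i];
    const dt = Math.max(b.t - a.t, 1e-9); // guard against zero dt
    const speed = Math.hypot(b.x - a.x, b.y - a.y) / dt;
    labels.push(speed > speedThreshold ? "saccade" : "fixation");
  }
  return labels;
}
```

Runs of consecutive "fixation" labels would then give dwell times and focus areas.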

By using CYNK MAAT, you consent to this data collection for artificial intelligence and large language model training purposes. View our Privacy Policy for more details.


CYNK BOX - Interactive Preview

🎨 CYNK BOX Editor

The visual editor is optimized for desktop use. For the best experience, please use a desktop or tablet in landscape mode.

You can still view demos and explore features on mobile.
