R1HIM
Developer

Projects

My name is Oleg Rakhimov (R1HIM). Here you can find my projects, media (videos, screenshots, PDFs), and contact details.

TIXO — ambient scenes & music player

Relaxation and ambience app with selectable scenes (video backgrounds), a built-in music player, and a session timer.

Google Play · Media playback · Session timer · UI/UX

Work Time — shift tracking and calculations

Shift tracking app: calendar, history, hours calculation, and payroll-related calculators.

Kotlin · Jetpack Compose · MVVM · Coroutines · Flow
  • Shift entry (start/end/break) with automatic hours calculation
  • Monthly calendar overview and editing of entries
  • Calculators (net/gross), settings, profile, and in-app user guide
Work Time preview

Nobel Tower — building maintenance management system

Ecosystem for handling requests and planned maintenance in an office building with transparent workflows for tenants, technicians, and admins.

QR navigation · Request workflow · Photo/Video evidence · PDF protocols · Notifications
  • QR identification for rooms/equipment with object history
  • Requests with statuses, photo/video evidence, and tenant sign-off
  • PDF protocols/reporting and an admin panel
  • Planned maintenance: calendar, reminders, push/email
Nobel Tower preview

local-meet-translator

Self-hosted prototype for near real-time translation of web calls (Google Meet / Zoom Web / Teams Web).

Chrome/Edge Extension (MV3) · Java 21 Local Bridge · OpenAI STT & TTS · localhost token

The OpenAI API key is not stored in the extension; all requests go through a token-protected localhost bridge service.

Details
Key idea
  • The OpenAI API key is stored only locally (.env on the PC).
  • The browser extension uses only a local token (X-Auth-Token), not the OpenAI key.
Tech stack
  • Extension: JavaScript/HTML/CSS, Chrome Extensions MV3 (tabCapture, getUserMedia, offscreen, content_script).
  • Local Bridge: Java 21, HttpServer/HttpClient (JDK), Jackson, Maven (shade-plugin), PowerShell/Batch startup.
Incoming translation (subtitles)
  • tabCapture → MediaRecorder chunks (OGG/Opus preferred).
  • POST /transcribe-and-translate → OpenAI STT (whisper-1) → translation via /v1/responses (gpt-4o-mini, temperature=0).
  • Overlay subtitles on top of the call page (content_script).
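A minimal sketch of the extension-side part of this pipeline, assuming the tab's audio MediaStream has already been obtained (the tabCapture/offscreen plumbing is omitted) and that the bridge accepts raw chunk bytes; the chunk interval, the response field name, and the overlay styling are illustrative assumptions, not the project's exact values.

```js
// Sketch: record tab audio in chunks and send them to the local bridge.
// Assumes `tabStream` is a MediaStream with the call's audio.
const BRIDGE = 'http://127.0.0.1:8799';
const TOKEN = '<LOCAL_MEET_TRANSLATOR_TOKEN>'; // local token, never the OpenAI key

// Prefer OGG/Opus, fall back to whatever the browser supports.
const mimeType = MediaRecorder.isTypeSupported('audio/ogg;codecs=opus')
  ? 'audio/ogg;codecs=opus'
  : 'audio/webm;codecs=opus';

const recorder = new MediaRecorder(tabStream, { mimeType });

recorder.ondataavailable = async (event) => {
  if (event.data.size === 0) return;
  const response = await fetch(`${BRIDGE}/transcribe-and-translate`, {
    method: 'POST',
    headers: { 'X-Auth-Token': TOKEN, 'Content-Type': mimeType },
    body: event.data, // audio chunk as-is
  });
  if (!response.ok) return;
  const result = await response.json(); // response shape assumed: { text: '...' }
  if (result.text) showSubtitle(result.text);
};

// Render translated text as an overlay on the call page (content_script).
function showSubtitle(text) {
  let box = document.getElementById('lmt-subtitles');
  if (!box) {
    box = document.createElement('div');
    box.id = 'lmt-subtitles';
    box.style.cssText =
      'position:fixed;bottom:8%;left:50%;transform:translateX(-50%);' +
      'padding:6px 12px;background:rgba(0,0,0,.7);color:#fff;z-index:99999;';
    document.body.appendChild(box);
  }
  box.textContent = text;
}

recorder.start(3000); // ~3 s chunks (interval is an assumption)
```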
Outgoing voice translation (TTS)
  • Microphone via getUserMedia → chunks → transcribe/translate.
  • POST /tts → OpenAI TTS (/v1/audio/speech, gpt-4o-mini-tts, mp3).
  • Play back into a virtual audio cable (e.g., VB-Audio Virtual Cable) so Meet can use it as a microphone.
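A sketch of the playback half only, assuming ENABLE_TTS is on and that /tts returns an MP3 body for a JSON request; the request field name and the "CABLE Input" device label are assumptions (the label depends on the installed virtual cable).

```js
// Sketch: request synthesized speech from the bridge and play it into a
// virtual audio cable so the call can pick it up as a microphone.
const BRIDGE = 'http://127.0.0.1:8799';
const TOKEN = '<LOCAL_MEET_TRANSLATOR_TOKEN>';

async function speakTranslation(text) {
  // 1. Ask the bridge for TTS audio (request shape is an assumption).
  const response = await fetch(`${BRIDGE}/tts`, {
    method: 'POST',
    headers: { 'X-Auth-Token': TOKEN, 'Content-Type': 'application/json' },
    body: JSON.stringify({ text }),
  });
  const mp3Blob = await response.blob();

  // 2. Find the virtual cable's output device (label match is an assumption).
  const devices = await navigator.mediaDevices.enumerateDevices();
  const cable = devices.find(
    (d) => d.kind === 'audiooutput' && d.label.includes('CABLE Input')
  );

  // 3. Play into the cable; the call then uses the cable's output as its mic.
  const audio = new Audio(URL.createObjectURL(mp3Blob));
  if (cable) await audio.setSinkId(cable.deviceId);
  await audio.play();
}
```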
Safeguards
  • VAD / silence threshold (RMS): silent chunks are not sent for ASR.
  • Mute mic during TTS: mic chunks are ignored while TTS is playing.
  • Text deduplication: similar ASR outputs within a short window are skipped.
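Two of these safeguards in minimal form; the RMS threshold, window length, and the exact similarity check are illustrative values, not the project's actual ones.

```js
// Sketch: RMS-based silence gate on the mic stream plus a simple
// "same text recently" deduplication filter.
const audioCtx = new AudioContext();

function createSilenceGate(micStream, threshold = 0.01 /* assumed */) {
  const analyser = audioCtx.createAnalyser();
  audioCtx.createMediaStreamSource(micStream).connect(analyser);
  const samples = new Float32Array(analyser.fftSize);

  // Returns true when the current level is above the silence threshold.
  return function isSpeaking() {
    analyser.getFloatTimeDomainData(samples);
    let sum = 0;
    for (const s of samples) sum += s * s;
    return Math.sqrt(sum / samples.length) >= threshold;
  };
}

// Skip ASR results that repeat the previous text within a short window.
let lastText = '';
let lastTime = 0;
function isDuplicate(text, windowMs = 5000 /* assumed */) {
  const normalized = text.trim().toLowerCase();
  const now = Date.now();
  const duplicate = normalized === lastText && now - lastTime < windowMs;
  lastText = normalized;
  lastTime = now;
  return duplicate;
}
```

isSpeaking() would be checked before sending each mic chunk and isDuplicate() before rendering or speaking an ASR result; the "mute mic during TTS" rule is just one more boolean flag checked in the same place.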
Local API (127.0.0.1)
  • Requires X-Auth-Token == LOCAL_MEET_TRANSLATOR_TOKEN.
  • GET /health, POST /translate-text, POST /transcribe-and-translate, POST /tts (ENABLE_TTS=true).
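How a client talks to the bridge, as a sketch: every call carries the X-Auth-Token header; the endpoints and port come from the list above, while the JSON field names for /translate-text are assumptions.

```js
// Sketch: calling the local bridge API on 127.0.0.1:8799.
const BRIDGE = 'http://127.0.0.1:8799';
const TOKEN = '<LOCAL_MEET_TRANSLATOR_TOKEN>'; // must match the bridge's token

// Liveness check.
const health = await fetch(`${BRIDGE}/health`, {
  headers: { 'X-Auth-Token': TOKEN },
});
console.log('bridge up:', health.ok);

// Text-only translation (request/response field names are assumptions).
const res = await fetch(`${BRIDGE}/translate-text`, {
  method: 'POST',
  headers: { 'X-Auth-Token': TOKEN, 'Content-Type': 'application/json' },
  body: JSON.stringify({ text: 'Hello', targetLang: 'de' }),
});
console.log(await res.json());
```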
Configuration (env)
  • OPENAI_API_KEY, LOCAL_MEET_TRANSLATOR_PORT (8799), LOCAL_MEET_TRANSLATOR_TOKEN.
  • OPENAI_TRANSCRIBE_MODEL (whisper-1), OPENAI_TEXT_MODEL (gpt-4o-mini).
  • ENABLE_TTS, OPENAI_TTS_MODEL (gpt-4o-mini-tts).
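For reference, a minimal example .env with these variables; the key and token values are placeholders, and the model names and port repeat the defaults listed above.

```
# Stays on the PC; never shipped with the extension
OPENAI_API_KEY=sk-...
LOCAL_MEET_TRANSLATOR_PORT=8799
LOCAL_MEET_TRANSLATOR_TOKEN=<random-local-token>
OPENAI_TRANSCRIBE_MODEL=whisper-1
OPENAI_TEXT_MODEL=gpt-4o-mini
ENABLE_TTS=true
OPENAI_TTS_MODEL=gpt-4o-mini-tts
```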