Senior InfoSec Engineer · LLMOps

Artem Borisov
Building AI systems for cybersecurity

4+ years in information security. I build LLM tools for the SOC: RAG agents for log normalization, incident recommendation generators, and production vLLM / LiteLLM / MLflow infrastructure.

01 / About

Senior Information Security Engineer at R-Vision (since October 2023), Technical Expertise Department — AI/ML tools for cybersecurity.

Primary focus: LLM applications for the SOC — VRL Normalization Agent (RAG on Claude Sonnet + ChromaDB/Qdrant for SIEM log normalization), documentation chatbots, log explanation agents, incident recommendation generators.

In parallel I work hands-on with infrastructure: Kubernetes, ClickHouse, PostgreSQL, Docker, SIEM/SOAR platforms (R-Vision, QRadar). Based in Moscow.

4+ yrs in InfoSec
25K+ EPS in prod
10K+ entities in RAG
12-stage AI pipeline
02 / Projects


2024—2025 · vrl-agent

VRL Normalization Agent

AI platform that auto-generates normalization rules for R-Vision SIEM. Ingests raw logs (syslog, CEF, JSON, key-value) and produces ready-to-use VRL rules via LLM + RAG with multi-stage validation.

Highlights
  • 12-stage pipeline: dedupe → auto-detect log format → RAG search for similar rules → VRL generation (normalizer + filter) → iterative refine → tests → YAML.
  • Best-of-N sampling with varying temperature; winner picked by error count.
  • ReAct chat with tool calling; SSE streaming of live generation progress.
  • LLM response caching (pickle + TTL); async-first stack (asyncpg, aiohttp).
  • Quality Validation — an LLM judges the semantic correctness of the generated rule.
  • ~64 Python backend modules, ~23 TS frontend components, 5 Docker services.
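The Best-of-N step above can be sketched in a few lines. This is a minimal illustration, not the project's code: `generate_rule` stands in for the real LLM call (here it returns canned strings keyed by temperature), and `count_errors` stands in for the validation stage that counts compile/test failures.

```python
def generate_rule(temperature: float) -> str:
    """Stand-in for the LLM call; in the real pipeline each sample
    hits the model once with a different sampling temperature."""
    fake_outputs = {0.2: "rule ERR ERR", 0.5: "rule OK", 0.8: "rule ERR"}
    return fake_outputs.get(temperature, "rule ERR")

def count_errors(rule: str) -> int:
    """Stand-in for the validator: number of errors found in a candidate."""
    return rule.count("ERR")

def best_of_n(temps=(0.2, 0.5, 0.8)) -> str:
    """Best-of-N: one candidate per temperature, keep the fewest-errors one."""
    candidates = [generate_rule(t) for t in temps]
    return min(candidates, key=count_errors)
```

The point of varying temperature per sample is diversity: low-temperature candidates are conservative, higher-temperature ones explore alternative rule structures, and the error-count metric picks the winner objectively.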
Stack
Python · FastAPI · PostgreSQL · Claude Sonnet 4.5 · OpenRouter · RAG / 3S · React · TypeScript · Vite · Docker · JWT
2024—2025 · 3s

3S — Soft Search Service

Semantic search service for documents and structured data with RAG support, hybrid search, and support for large collections (10K+ entities).

Highlights
  • Per-field named vectors — each content field is embedded separately into its own Qdrant named vector.
  • Entity Similarity with weighted scoring: avg_score * (1 + 0.1 * match_count) — bonus for multi-field matches.
  • Two-phase metadata-first algorithm to bypass Qdrant's limit on large collections: scroll_all + vector search with MatchAny.
  • Three search modes — vector / bm25 / hybrid (RRF), flashrank reranking, AND/OR filters.
  • Multimodal: PDF/DOCX/PPTX + image OCR captions via BLIP / Florence-2.
  • JWT + per-user API tokens, isolated spaces (multitenancy), 57 documented endpoints.
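The weighted scoring formula from the Entity Similarity bullet is simple enough to show directly. A minimal sketch (function name and input shape are assumptions, the formula is from the bullet): each element of `field_scores` is the vector-similarity score of one matched field, so a document matching on more fields gets a multiplicative bonus on top of its average score.

```python
def entity_similarity(field_scores: list[float]) -> float:
    """Weighted entity score: avg_score * (1 + 0.1 * match_count).
    Rewards entities that match the query across multiple named vectors."""
    if not field_scores:
        return 0.0
    avg_score = sum(field_scores) / len(field_scores)
    match_count = len(field_scores)
    return avg_score * (1 + 0.1 * match_count)
```

For example, a single-field match at 0.8 scores 0.88, while two fields at 0.9 and 0.7 (same 0.8 average) score 0.96: breadth of match breaks ties between equally strong averages.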
Stack
FastAPI · SQLAlchemy 2.0 · asyncpg · Celery + Redis · Qdrant · MinIO · fastembed (e5-large) · flashrank · React 18 · TypeScript · Tailwind · Docker Compose
2025 · vllm-infra

On-premise LLM infrastructure

vLLM + Qwen3 / Qwen3.5 MoE on RTX 6000 Ada, LiteLLM proxy as a single entry point, MLflow Prompt Registry, dev/prod segmentation architecture.

Highlights
  • Deployed vLLM with Qwen3.5-35B-A3B-GPTQ-Int4: awq_marlin quantization, fp8 KV-cache, prefix caching, chunked prefill, tool calling (hermes parser), 24K context.
  • Upgraded vLLM to 0.17.1 for Qwen3.5 MoE support.
  • Production LiteLLM proxy stack (PostgreSQL 16 + Redis 7 in Docker) — single entry point for all prod LLM calls.
  • MLflow on CentOS 9 with MinIO artifact store: Prompt Registry, Dataset tracking, AI Gateway.
  • Architectural planning for dev/prod segmentation: server sizing, naming scheme, VM migration, inter-segment flows, network and server diagrams.
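The serving setup in the first bullet maps onto standard `vllm serve` flags. A sketch only, not the exact deployment: the model name and flag values are taken from the bullets above, while the repo path is an assumption.

```shell
# Sketch of the vLLM launch described above:
# marlin-backed quantization kernels, fp8 KV cache, prefix caching,
# chunked prefill, hermes-style tool calling, 24K context window.
vllm serve Qwen/Qwen3.5-35B-A3B-GPTQ-Int4 \
  --quantization awq_marlin \
  --kv-cache-dtype fp8 \
  --enable-prefix-caching \
  --enable-chunked-prefill \
  --max-model-len 24576 \
  --enable-auto-tool-choice \
  --tool-call-parser hermes
```

With LiteLLM in front as the single entry point, clients never talk to this endpoint directly; the proxy handles routing, auth, and usage tracking across dev and prod segments.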
Stack
vLLM · Qwen3 / Qwen3.5 MoE · LiteLLM · PostgreSQL 16 · Redis 7 · MLflow · MinIO · Docker · NVIDIA CUDA · CentOS 9
2024 · lecture

University lecture: AI in Cybersecurity

Guest lecture for CS students on applying LLMs and ML to infosec tasks — evangelism and talent pipeline.

Highlights
  • SOC automation via LLMs — where it works and where it doesn't.
  • RAG agents for SIEM log normalization, using the VRL Normalization Agent as a case study.
  • Incident recommendation generation and log explanation.
  • Limits and risks of LLMs inside isolated networks — network isolation, auditability, prompt injection.
Stack
LLM · RAG · SIEM · SOC automation
03 / Contact

Open to discussing interesting problems and collaboration.