How I built Neuron

The story of how a small side project became a real product.

The Past • "Codemon"

I felt like a genius

Back in college, I made a simple project called "Codemon". It took user code and ran it directly on my laptop using a basic script.

I deployed it, felt amazing, and went silent for months. I literally thought, "Cloud computing is so easy, why does everyone complain?"

Months later, I woke up with a scary thought:
"Wait... if someone writes code to delete files, my entire server vanishes."

I realized I hadn't built a product. I had built a button for strangers to destroy my server.

The Spark

"Too Expensive"

Recently, a tech influencer friend told me he wanted to add a "Run Code" feature to his blog. He checked tools like Judge0, but they were too expensive for a small project.

My engineering brain kicked in:
"Dedicated services are costly? I can build one for free on a cheap server."

(It was not free. It cost him his sleep.)

Phase 1 • The Freeze

Cooking my Mac

For my first attempt, I used Kafka because "big companies use it". It turned out to be overkill, taking 1.5 seconds just to pick up a task.

I fixed that by switching to Redis, but then I hit a bigger wall. I made a service that starts a new Docker container for every request.

I thought I was a genius again. So I fired 1,000 requests at it to "stress test" it.
Boom. My system froze. The music stopped. My laptop turned into a brick.

Lesson: You cannot spin up and destroy 1,000 virtual computers in seconds.

Phase 2 • The Queue

Stable, but Slow

To fix the crashing, I limited the number of active containers (e.g., max 10 at a time). If more requests came in, they had to wait in line.

This stopped the crashing, but it was still bad. Even if there was no queue, every user had to wait 2 seconds for their container to boot up ("Cold Start").

It felt sluggish. I needed it to be instant.

Phase 3 • The Solution

Reuse, Reuse, Reuse

The final fix was Pooling. Instead of throwing containers away after one use, I started creating them in advance and reusing them.

Now, a pool of containers sits idle, waiting for work. When you send code, it runs instantly; once the job finishes, the container is sanitized and returned to the pool.
Zero loading time. Zero waiting.
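
In Go, the whole pool can be modeled as a buffered channel acting as a free list. This is an illustrative sketch, not the real runner: `container`, `newPool`, `acquire`, and `release` are made-up names, and the sanitize step is only a comment here.

```go
package main

import "fmt"

// container stands in for a pre-warmed Docker container.
type container struct{ id int }

// pool hands out warm containers and takes them back after use.
// The buffered channel doubles as the free list.
type pool struct{ free chan *container }

func newPool(size int) *pool {
	p := &pool{free: make(chan *container, size)}
	for i := 0; i < size; i++ {
		p.free <- &container{id: i} // pre-warm: create everything up front
	}
	return p
}

// acquire blocks until a warm container is available — no cold start.
func (p *pool) acquire() *container { return <-p.free }

// release sanitizes the container and returns it to the pool.
func (p *pool) release(c *container) {
	// (real system: wipe the filesystem, kill stray processes, reset limits)
	p.free <- c
}

func main() {
	p := newPool(3)
	c := p.acquire()
	fmt.Println("got warm container", c.id) // runs instantly, already booted
	p.release(c)
}
```

The nice property of the channel-as-pool trick is that "no containers free" and "wait in line" fall out for free: `acquire` just blocks until a `release` happens.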

Present • Architecture

The Stack that Survived

After rewriting the engine 3 times, this is the stack that finally stopped the crashes:

  • Go Backend: Switched from Node.js to Golang. Needed raw concurrency for managing Docker pools.
  • Redis Queue: Redis Streams. No Kafka bloat. Sub-5ms latency.
  • Docker Runner: Pre-warmed containers with strict resource limits (CPU/RAM).

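
Those "strict resource limits" map onto standard `docker run` flags. A hedged example of the hardening involved — the image name and the exact numbers are placeholders, not my production config:

```shell
# Illustrative hardening for a runner container — tune per language runtime.
# --cpus / --memory cap CPU and RAM; --pids-limit stops fork bombs;
# --network=none blocks outbound traffic; --read-only freezes the rootfs.
docker run --rm \
  --cpus="0.5" \
  --memory="128m" \
  --pids-limit=64 \
  --network=none \
  --read-only \
  runner-image:latest
```
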
Future • Optimization

The Need for Speed

The system works, but I want it faster. Current benchmarks are decent but not perfect:
JS / Python .... ~250ms
C++ ............ ~700ms
Java ........... ~900ms

My goal is to drastically reduce these times.

I am actively exploring next-gen isolation tech like Firecracker VMs (used by AWS), gVisor, and Warp to replace standard Docker containers.
Docker was just the beginning.