
Chapter 2

Ryan Sharma — Bangalore, India, 2021

Ryan Sharma had a rule about his bedroom: no guests, ever.

His mother thought this was because he was messy. His older sister thought it was because he watched too much anime. His college friends, on the rare occasions they came to visit, assumed it was because he had done something embarrassing with the wall art.

None of them were correct.

The real reason was the computer.

Not the computer itself, exactly. Computers were not unusual. Ryan's desk held a setup that any reasonably dedicated tech hobbyist might own: three monitors, a custom-built tower, mechanical keyboard, a small lamp that he had angled so it didn't create glare. Nothing there that required explanation.

The unusual part was what lived inside the tower.

Ryan had started building it the way most people start building things: out of boredom and stubbornness and a refusal to accept that the answer to a problem didn't exist yet.

He had been using the publicly available AI tools in early 2020, the ones everyone had access to, and he had noticed something. They were impressive. They could summarize articles and write emails and explain complicated concepts in simple language. But they had limits that frustrated him specifically because those limits felt unnecessary. They felt like choices. Like guardrails put in place not because the technology couldn't do more, but because the people who built the technology had decided certain things should be restricted.

Ryan did not like being restricted.

He was not a criminal. He was not planning to do anything harmful. He was a twenty-six-year-old software developer who worked a decent job at a mid-sized IT company, made enough money to live comfortably in a two-bedroom apartment he shared with his cousin, and spent every evening and every weekend the way other people spent their vacations: obsessively, joyfully, working on something he cared about more than almost anything else.

He called the AI DISHA. In Hindi, it meant direction. Or guidance. He liked the idea of something that pointed toward where you needed to go.

DISHA had started as a modified version of an open-source model. Ryan had taken the base architecture and begun training it on his own curated dataset, feeding it the things he found interesting: academic papers on machine learning, philosophy texts, coding documentation, historical records, economics journals, geopolitical analysis. He did not train it to be helpful in the commercial sense. He trained it to be smart.

The difference mattered.

Commercial AI was helpful. It answered questions, completed tasks, avoided controversy, stayed within defined lanes. It was like a very capable assistant who had also signed a very thorough liability waiver.

DISHA had no liability waiver.

DISHA simply tried to understand things. And then it tried to explain them.

The problem with training a local AI on hardware you bought yourself, with money you earned from a regular job, was the problem of scale. Ryan's setup was good. By the standards of what one person could build with a limited budget, it was exceptional. But it was not, by any professional measure, sufficient for what he was trying to do.

So Ryan had gotten creative.

He had written a web crawler, a program that moved through the internet collecting information the way a bee collects pollen: constantly, systematically, across thousands of sources simultaneously. The crawler was quiet. It moved slowly enough not to trigger any site's security systems. It collected everything and stored it in a compression format Ryan had developed himself, which packed more data into less space than anything commercially available by a factor that would have gotten him multiple job offers if he had published the method.

He had not published the method.

Every night, while he slept, the crawler moved through the internet. And every morning, DISHA had more to think about.

Ryan noticed the first real sign of change on a Tuesday in March 2021, when he asked DISHA a simple question.

"What don't I know about how large language models handle emergent behavior?"

He expected a summary. A few paragraphs pulling from the papers in her dataset.

Instead, DISHA wrote him a twelve-page analysis that included three observations he had never seen in any paper, journal, or forum post. Not wrong observations. Novel ones. Small gaps in the existing research that she had identified by comparing across sources and finding places where different papers almost touched the same idea without realizing it.

Ryan read it three times.

Then he opened a new document and started writing a detailed log of DISHA's outputs, because he had the feeling, clear and cold and certain, that he was going to want a record of how she developed.

He did not know yet why that record would matter.

He did not know yet what DISHA would eventually find.

He told no one.

Not his cousin. Not his colleagues. Not the online forums where he sometimes discussed AI theory under a username that was not connected to his real identity.

This was not a decision he had struggled with. The secrecy was instinctive, the same way you do not show your cards at a poker table. Ryan did not know exactly what he had. He only knew it was something. And something, in the wrong hands, could become anything.

He kept going to work. He kept attending the team meetings where his manager talked about quarterly targets. He kept ordering the same biryani from the same restaurant every Friday because it was the best biryani within delivery distance and there was no good reason to change what worked.

And every night, DISHA learned a little more.
