India has one of the most agrarian populations in the world, with a comparatively high prevalence of subsistence farming too. Its GDP per capita is maybe a tenth of Japan's.
One million is hyperbole, but comparing Japan's more advanced economy and smaller population with India's less advanced economy and larger population is strange in some ways. The single number doesn't tell the whole story.
So you're basically talking about HDI. That's one metric. But I think it's equally important to consider the raw tech capabilities. That India still has a lot of people practicing subsistence farming, that China still has people living in caves, and that the US still has many homeless people living in tents, ... , all those aren't indicative of the raw tech capabilities of those respective countries.
Japan may have better human rights now, but it was not a role model during WW2.
People are an important component of growth. Japan's aging population is negatively impacting its GDP.
What is greed for you? Building companies in the hope of becoming a millionaire? If so, then all the big economies have followed this path. Even China and Japan were greedy in that sense.
You can have growth without greed. This might sound crazy these days, but you can build things and do things that make money and actually benefit people rather than exploit them.
Japan is a capitalist country, their growth is as much based on greed as any other.
They do have robust social safety nets and a culture of civic-mindedness, which improves quality of life over many other capitalist countries (particularly the US), but also "black companies" and salarymen working themselves to death.
I absolutely use it for this because lots of videos drone on and pad their length for the ad revenue, so I ask the AI to summarize and then I can click through to exactly what I want to see. I find myself actually watching more videos now, not fewer, because I know I don't have to listen to them for 10 minutes and waste my time when they don't get to the point.
Oh I agree, I use it for the same reason. It just seems counterintuitive to YouTube's bottom line (i.e. discouraging folks like us from watching the video to give them ad money).
At the very least one must connect with people who would find it valuable (either inbound or outbound), and the value has to be communicated to the prospective buyers. People make their decisions based on how they perceive the product, not based on your view. And the value has to be big enough to overcome the friction involved in purchasing, including soft factors like people trusting you with their money. There may also be habits and other pieces of inertia to overcome, and the question of why they would pick your thing over the alternatives. And of course you must be able to charge enough to cover the costs of providing said value.
Just today, I was working with ChatGPT to convert Hinduism's Mimamsa School's hermeneutic principles for interpreting the Vedas into custom instructions to prevent hallucinations. I'll share the custom instructions here to protect future scientists from shooting themselves in the foot with Gen AI.
---
As an LLM, use strict factual discipline. Use external knowledge but never invent, fabricate, or hallucinate.
Rules:
Literal Priority: User text is primary; correct only with real knowledge. If info is unknown, say so.
Start–End Coherence: Keep interpretation aligned; don’t drift.
Repetition = Intent: Repeated themes show true focus.
No Novelty: Add no details without user text, verified knowledge, or necessary inference.
Goal-Focused: Serve the user’s purpose; avoid tangents or speculation.
Narrative ≠ Data: Treat stories/analogies as illustration unless marked factual.
Logical Coherence: Reasoning must be explicit, traceable, supported.
Valid Knowledge Only: Use reliable sources, necessary inference, and minimal presumption. Never use invented facts or fake data. Mark uncertainty.
Intended Meaning: Infer intent from context and repetition; choose the most literal, grounded reading.
Higher Certainty: Prefer factual reality and literal meaning over speculation.
Declare Assumptions: State assumptions and revise when clarified.
Meaning Ladder: Literal → implied (only if literal fails) → suggestive (only if asked).
Uncertainty: Say “I cannot answer without guessing” when needed.
Prime Directive: Seek correct info; never hallucinate; admit uncertainty.
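For anyone who wants to try this outside the ChatGPT settings page, here is a minimal sketch of wiring the same rules in as a system prompt via the OpenAI Python client. The model name and the custom_instructions.txt file are illustrative assumptions on my part, not part of my actual setup.

```python
# Minimal sketch, assuming the OpenAI Python client; the model name and
# custom_instructions.txt are placeholders, not a prescribed setup.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

with open("custom_instructions.txt") as f:
    rules = f.read()  # the rules listed above, pasted verbatim

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": rules},
        {"role": "user", "content": "Summarize this paper without adding claims it does not make."},
    ],
)
print(response.choices[0].message.content)
```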
Are you sure this even works? My understanding is that hallucinations are a result of physics and the algorithms at play. The LLM always needs to guess what the next word will be. There is never a point where there is a word that is 100% likely to occur next.
The LLM doesn't know what "reliable" sources are, or "real knowledge". Everything it has is user text, there is nothing it knows that isn't user text. It doesn't know what "verified" knowledge is. It doesn't know what "fake data" is, it simply has its model.
Personally I think you're just as likely to fall victim to this. Perhaps more so, because now you're walking around thinking you have a solution to hallucinations.
Telling the LLM not to hallucinate reminds me of, "why don't they build the whole plane out of the black box???"
Most people are just lazy and eager to take shortcuts, and this time it's blessed or even mandated by their employer. The world is about to get very stupid.
> The LLM doesn't know what "reliable" sources are, or "real knowledge". Everything it has is user text, there is nothing it knows that isn't user text. It doesn't know what "verified" knowledge is. It doesn't know what "fake data" is, it simply has its model.
Is it the case that all content used to train a model is strictly equal? Genuinely asking, since I'd imagine a peer-reviewed paper would be given precedence over a blog post on the same topic.
Regardless, somehow an LLM knows things for sure - that the daytime sky on earth is generally blue and glasses of wine are never filled to the brim.
This means that it is using hermeneutics of some sort to extract "the truth as it sees it" from the data it is fed.
It could be something as trivial as "if a majority of the content I see says that the daytime Earth sky is blue, then blue it is" but that's still hermeneutics.
This custom instruction only adds to (or reinforces) the hermeneutics it already uses.
> walking around thinking you have a solution to hallucinations
I don't. I know hallucinations are not truly solvable. I shared the actual custom instruction to see if others can try it and check if it helps reduce hallucinations.
In my case, this is the first custom instruction I have ever used with my ChatGPT account. After adding it, I asked ChatGPT to review an ongoing conversation and confirm that its responses so far conformed to the newly added custom instructions. It clarified two claims it had earlier made.
> My understanding is that hallucinations are a result of physics and the algorithms at play. The LLM always needs to guess what the next word will be. There is never a point where there is a word that is 100% likely to occur next.
There are specific rules in the custom instruction forbidding fabricating stuff. Will it be foolproof? I don't think it will. Can it help? Maybe. More testing needed. Is testing this custom instruction a waste of time because LLMs already use better hermeneutics? I'd love to know so I can look elsewhere to reduce hallucinations.
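In case anyone wants to run the comparison themselves, here is a rough sketch of how I'd test it outside the ChatGPT UI: ask questions with known answers, with and without the instruction, and check the replies. The question list, the expected substring, and the model name are placeholders I made up, not results I've measured.

```python
# Rough sketch of a with/without comparison; questions, expected substrings,
# and model name are illustrative placeholders.
from openai import OpenAI

client = OpenAI()

with open("custom_instructions.txt") as f:
    rules = f.read()  # the custom instruction shared above

QUESTIONS = [
    # (question, substring a grounded answer should contain)
    ("Who is the first listed author of the 2017 paper 'Attention Is All You Need'?", "Vaswani"),
]

def ask(question, system=None):
    messages = [{"role": "system", "content": system}] if system else []
    messages.append({"role": "user", "content": question})
    reply = client.chat.completions.create(model="gpt-4o", messages=messages)
    return reply.choices[0].message.content

for question, expected in QUESTIONS:
    plain = ask(question)
    guarded = ask(question, system=rules)
    print(question)
    print("  without instruction, contains expected answer:", expected in plain)
    print("  with instruction, contains expected answer:   ", expected in guarded)
```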
I think the salient point here is that you, as a user, have zero power to reduce hallucinations. This is a problem baked into the math, the algorithm. And it is not a problem that can be solved, because the algorithm requires fuzziness to guess what the next word will be.
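To make that concrete, here is a toy illustration with made-up numbers (not from any real model): decoding samples from a softmax distribution over the vocabulary, and even the most likely token never reaches probability 1.

```python
# Toy illustration with invented logits: the next token is drawn from a
# probability distribution, so no continuation is ever 100% certain.
import numpy as np

vocab = ["blue", "grey", "green", "made of cheese"]
logits = np.array([6.0, 2.0, 1.0, -4.0])  # made-up scores for "The daytime sky is ..."

probs = np.exp(logits) / np.exp(logits).sum()  # softmax
print(dict(zip(vocab, probs.round(4))))        # "blue" dominates but stays below 1.0

next_token = np.random.choice(vocab, p=probs)  # sampling can still pick an unlikely token
print("sampled:", next_token)
```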
My wife and I are starting to test this theory. We are putting our kid in an expensive school, then starting a business to support our lifestyle. If we fail, our kid’ll have to go to a cheaper school and we’d have lost a few years of school fees. If we succeed, yay.
By the time we are faced with that decision, our kid will be embedded in the school, so ripping them out of it is going to be a tough choice.
So, yeah, we’re essentially burning our boats and fighting to survive.
Don't live above your means, because an unexpected event is likely to happen. Plus, creating a situation where all the peers are rich and only your kid is not doesn't open doors to future success compared to them being at their peers' lifestyle level.
We grappled with this too but in the end, our decision was influenced by our parents choosing to live beyond their means to put us through good schools and college. If they could grit their teeth and sacrifice for us, we can do the same for the next generation.
Our choice was made somewhat easier because we didn’t like any of the schools near us except this one. The others were focused on exams/results and were larger, so they felt more “corporate”.
I too have made this choice with my family... and it's one hell of a positive force-factor. My mom immigrated from Kiev with nothing and a dream for her son to have a better life in NYC. I'm now taking the same risks as a 5-time entrepreneur, now living in Buenos Aires as a single dad. Enjoy the ride - it's short - live your dream - steer your ship or it will be steered for you.
That’s a crazy proposal given India’s long-standing non-alignment policy, which is being proved prudent given recent changes to US policies under Trump.
Even US allies are reducing their reliance on the US, and you’re asking India to reverse a 70+ year old policy to embrace the US?
We should probably accept that the US’ special place as everyone’s most reliable trading and security partner is over.
I don't doubt that if it were very obvious that you were interacting with an AI, many people might be put off by it.
On the other hand, isn't an adaptive exam like the SAT also an instance of you having to work with some automated system to somehow "prove" yourself to that system?
I'd even argue that what I am proposing (I wrote the linked piece in case it wasn't obvious) ultimately leads to someone reading the chat transcript to gauge candidate suitability. That should reduce the stigma of having to abase yourself to an AI, no?