However, the US government has, or at least can exert, control over Micron's production. Micron is headquartered in the US and has the intellectual property and know-how to erect a vertically integrated supply chain. Europe doesn't have this kind of strategic investment.
We were not a ChatGPT wrapper; we used a finetuned open-source model running on our own hardware, so we naturally had full control of the input parameters. I apologize if my language was ambiguous, but by "expose seeds" I simply meant users can see the seed used for each prompt and input their own in the UI, rather than "exposing secrets" of the frontier LLM APIs, if that's what you took it to mean.
I just wanted deterministic outputs and was curious how you were doing it. Sounds like probably temp = 0, which major providers no longer offer. Thanks for your response.
No, seed and temperature are separate parameters accepted by the inference engine. You can still get deterministic outputs with high temp if you're using the same seed, provided the inference engine itself operates in a deterministic manner, and the hardware is deterministic (in testing, we did observe small non-deterministic variations when running the same prompt on the same stack but a different model of GPU).
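To illustrate why seed and temperature are independent knobs, here is a minimal sketch of temperature sampling with a seeded RNG (a toy stand-in for an inference engine's sampler, not any particular engine's implementation): the same seed and logits yield the same token no matter how high the temperature is.

```python
import math
import random

def sample_token(logits, temperature, seed):
    """Sample one token id from logits using temperature scaling and a
    seeded RNG. Same seed + same logits -> same token, regardless of
    how high the temperature is set."""
    rng = random.Random(seed)  # seed and temperature are independent parameters
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    r = rng.random()
    cum = 0.0
    for i, p in enumerate(probs):
        cum += p
        if r < cum:
            return i
    return len(probs) - 1

logits = [2.0, 1.0, 0.5, -1.0]
a = sample_token(logits, temperature=1.5, seed=42)
b = sample_token(logits, temperature=1.5, seed=42)
assert a == b  # deterministic even at high temperature
```

In a real engine, determinism additionally requires that the forward pass producing the logits is itself deterministic, which is exactly where the GPU-model variation mentioned above creeps in.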
It might have been explicitly targeted, but they did say that there were older versions of Notepad++ with "insufficient update verification controls", so it might just be that only a subset of users was actually susceptible to this.
No, the additional update verification was added after this attack was discovered. All Notepad++ installations were vulnerable during the time of the hijacking campaign.
I believe what the parent comment was referring to is the advice not to praise character, but instead to praise hard work.
“You’re so smart” leaves room for failure when they encounter something that challenges their image of being smart. Praising the amount of effort they put in is not something that is taken away or challenged regardless of the outcome.
One of my kids is particularly brilliant, and what I found is that the combination works best: "you are smart, therefore I have high expectations" AND "without doing the work, being smart doesn't matter". Together these create a self-image of the capable doer.
I think it’s also largely driven by the apparent cheapness of turning the CapEx of buying servers into the OpEx of renting cloud capacity. There's less up-front investment, and the auditing/access controls for SOC 2 compliance are so much easier.
It may be that the hardware previously running arXiv was due for retirement, and this is just another CapEx -> OpEx decision of the kind being made by so many tech companies.
I'd like to know: is GCP covering part of the bill, or will Cornell be paying all of it? The new architecture smells of "[GCP] will pay/credit all of these new services if you agree to let one of our architects work with you". If GCP is helping, stay tuned for a blog post from google some time around the completion of the migration with a title like "Reaffirming our commitment to science" or something similarly self affirming.
> If GCP is helping, stay tuned for a blog post from google some time around the completion of the migration with a title like "Reaffirming our commitment to science" or something similarly self affirming.
"Google pays to run an enormous intellectual resource in exchange for a self-congratulatory blogpost" seems like a perfectly acceptable outcome for society here.
> If GCP is helping, stay tuned for a blog post from google some time around the completion of the migration with a title like "Reaffirming our commitment to science" or something similarly self affirming.
This is an odd criticism. If a company is footing the bill, it can’t even talk about it to gain some publicity/good will?
How much is the bill for running Arxiv? $1000 - $3000/month? Yeah, I don't think Google deserves any recognition for footing that bill. Likely just another self-congratulatory bullshit move on behalf of big G.
> "Reaffirming our commitment to science" or something similarly self affirming.
While I understand that something is more genuine if done in secret, it doesn't stop being a real commitment to science just because you make a PR post about it.
If company X contributes to open-source foundation Y, that's real and they get to claim clout; nobody cares about a post anyway.
I believe they are using scalable TTC. The o3 announcement released accuracy numbers for high and low compute usage, which I feel would be hard to do in the same model without TTC.
I also believe that the $200 subscription they offer is just them allowing the TTC to run for longer before forcing the model to answer.

If what you say is true, though, I agree that there is huge headroom for TTC to improve results, if the Hugging Face experiments on 1B/3B models are anything to go by.
The other comment posted YT videos where OpenAI researchers talk about TTC, so I was wrong. That $200 subscription is priced that way because the number of tokens generated is huge when CoT is involved. Inference output is usually capped at 2000-4000 tokens (max of ~8192), but they cannot do that with o1 and all the thinking tokens involved. This is true with all the approaches: next-token prediction, TTC with beam/lookahead search, or MCTS + TTC. If you set the output token budget high and induce a model to think before it answers, you will get better results on smaller/local models too.
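As a concrete sketch of "set the output budget high and induce thinking": assuming an OpenAI-compatible chat endpoint in front of a local model (the function name and the `<thinking>` tag convention are illustrative, not any provider's actual API), a request might look like this.

```python
def build_cot_request(question, max_output_tokens=8192):
    """Build a chat request that induces 'thinking' on a small/local model:
    raise the output-token budget well above the usual 2k-4k cap and
    instruct the model to reason step by step before answering.
    (Hypothetical helper; parameter names follow the OpenAI-compatible
    chat-completions convention.)"""
    return {
        "messages": [
            {"role": "system",
             "content": ("Think step by step inside <thinking> tags, "
                         "then give your final answer.")},
            {"role": "user", "content": question},
        ],
        "max_tokens": max_output_tokens,  # room for thinking tokens + the answer
        "temperature": 0.7,
    }

req = build_cot_request("A train travels 120 km in 1.5 hours. Average speed?")
```

The point is simply that most of the budget goes to thinking tokens, which is why the usual caps are far too small for reasoning-style generation.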
> huge headroom for TTC to improve results ...1B/3B models
Absolutely. How this is productized remains to be seen. I have high hopes for MCTS and Iterative Preference Learning, but that is harder to implement. Not sure if OpenAI has done that, though DeepMind's results are unbelievably good [1].
TTC is an incredibly broad term, and it is broadening further as the hype spreads. People are now calling CoT "TTC" simply because it spends compute on reasoning tokens before answering.
Yes, and Hugging Face have published this, outlining some of the potential ways to use TTC, including but not limited to tree search, and showing TTC performance gains on Llama.