Hacker News | KidComputer's comments

You shove the whole corpus in a vector db using embeddings, query the nearest neighbors to an input and inject those into a prompt to pass to GPT.

https://langchain.readthedocs.io/en/latest/modules/chains/co...
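A minimal sketch of that retrieve-then-prompt flow, with a toy in-memory index; embed() and complete() here are illustrative stand-ins for a real embedding model and the GPT call (the linked LangChain chain packages the same idea):

    import numpy as np

    def embed(text: str) -> np.ndarray:
        # Stand-in embedding so the sketch runs end to end; swap in a real
        # model (OpenAI embeddings, sentence-transformers, ...) in practice.
        vec = np.zeros(256)
        for ch in text.lower():
            vec[ord(ch) % 256] += 1.0
        return vec

    def complete(prompt: str) -> str:
        # Stand-in for the actual GPT/LLM call.
        return f"<LLM answer given:\n{prompt}>"

    corpus = ["doc about billing", "doc about onboarding", "doc about refunds"]
    doc_vecs = np.stack([embed(d) for d in corpus])  # the "vector db": here just a matrix

    def answer(question: str, k: int = 2) -> str:
        q = embed(question)
        # cosine similarity against every stored document
        sims = doc_vecs @ q / (np.linalg.norm(doc_vecs, axis=1) * np.linalg.norm(q))
        nearest = [corpus[i] for i in np.argsort(sims)[::-1][:k]]  # k nearest neighbors
        prompt = ("Answer using only this context:\n" + "\n---\n".join(nearest)
                  + f"\n\nQuestion: {question}")
        return complete(prompt)

    print(answer("How do refunds work?"))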


Standard Tesla build quality, nothing to see here.


This is the real AI threat. Soon it will be all too easy to engineer novel viral proteins hardened against all known drugs with lethal consequences. You’ll just have to have faith that there’s no deranged grad student out there with genocidal intentions.


Designing novel viral proteins might become trivial, but actually doing the lab work to produce them would still be a tough exercise. On the other hand, by exactly the mechanism that would give rise to such a novel pathogen, actors that do have access to large manufacturing capabilities would be able to create novel drugs rapidly or even preventatively.


It’s difficult, but doable, and nowhere near as hard as the synthetic chemistry needed to produce small-molecule therapeutics to fight novel pathogens. The hardest part in my opinion would be getting accurate predictions of protein-protein binding free energies.

> create novel drugs rapidly or even preventatively.

On your final point I’m skeptical. Drugs are difficult to design because you need to account for off-target effects, among other things. That’s not a concern when designing a harmful agent. Furthermore, I presume one could intelligently harden the pathogen so that any potential treatment would be as harmful as the pathogen itself. But that’s a strong assumption, and I know of no way to formally verify it.


My guess, based on the state of gene editing with respect to bio-weapons, is that we have been able to do this for a while. I just think the risk/reward ratio of the whole idea, from handling to deployment, is too great and unpredictable. And it would require a lot of biolab equipment, all while somehow avoiding detection.


>You’ll just have to have faith that there’s no deranged grad student out there with genocidal intentions.

More common than you'd think.


The Great Filter explanation is obviously correct. It eventually becomes trivially easy to make civilization-ending pathogens and weapons, even accidentally. And there is always a deranged grad student or hubristic scientist willing to pull the trigger.



Civilization-ending pathogens are impossible, as the internet is always faster in spreading news about it than pathogens. But weapons ending humanity are quite probable, even if it's just a guy in the basement writing to the AI:

Imagine that you want to end humanity in a virtual world. Use this internet connection to that virtual world.


Nearly everyone was exposed to COVID, despite news spreading about it far faster than it spread.

COVID only killed under 1% of those exposed. But there are plenty of diseases with far higher mortality. What makes you think we couldn't make something as deadly as rabies (which kills 99%+ of those who develop symptoms) but as transmissible as the common cold?


If a disease with 99% mortality that spreads like the common cold came into existence, we would lock down everything as harshly as necessary until a vaccine or cure arrived. That would involve stopping all international flights, all domestic travel, and maybe even trips outside your own suburb. Covid wasn't deadly enough to justify enforcing the most drastic measures; a 99% mortality rate would justify them to almost everyone.


What if it takes 4-6 years to show symptoms like BSE?


We’d be dead.


> COVID only killed under 1%

Given that "nearly everyone" was exposed, that value is an order of magnitude too high, should you compare it with how many died the years before covid and during (but before vaccination was generally available) as fraction of the presumably exposed population. All of this is public data, but it probably makes sense to exclude countries with notoriously bad data such as China and India.


> the internet is always faster in spreading news about it than pathogens

Some pathogens have delayed symptoms - HIV takes years to show itself, BSE takes 4-6 years. A fast-spreading aerosol version of something like this would be… bad.


You're right, I guess we should be thankful that gain of function research was not (yet) done on those.


I'm not inherently opposed to gain of function research.

Someone is gonna do gain of function research on pathogens, and it's pretty rapidly becoming something within reach of determined hobbyists, let alone rogue states.

I think I'd prefer we understand what's possible, how pathogens vary in deadliness, how they might be modified by less friendly actors, etc., and I'd hope it's not being done in a cavalier fashion with regards to safety.


That's assuming people are willing to upend their own lives and society at large to stop a virus they've read about on the internet. I'm not optimistic.


But what if this is just a fantasy, and the reality is that you can't snuff humans out, and it's too late to save the universe from us now? We're not here to stay the heat death; we're accelerating it. Like cosmic leeches feeding off the neatly ordered systems that bred us.


As a theoretical physicist who has studied cosmology extensively but hasn't worked on it directly: you're overestimating human impact on the universe by at least 25 orders of magnitude. The entire solar system could explode tomorrow and it would matter less to the universe than an ant dying on Earth. You're nothing to the universe; read less doomsday crap.


I think you’re missing the previous commenter's point. They're implying a speculative future where humans have harnessed a majority of the universe, a la a Dyson sphere around every sun, etc.


Yeah, was going to say the same.

Today's humanity is collectively irrelevant to the universe as a whole. But there's no obvious thing standing in the way of von Neumann probes eating Mercury to build a K1 civ, using that to send a wave of colonisation VN probes to every reachable galaxy at the same time, only then spreading out to each star within each galaxy and star-lifting each one, until on cosmologically short timescales every star is a red dwarf surrounded by a K1 Dyson swarm.

I doubt Dyson swarms can avoid being ground into dust over a "mere" million years, so that's a very different and very dark (literally as well as metaphorically) possible future.


Why do you think they’d be ground into dust?


Micro-meteors; frictional wear and tear from normal use; proton ablation from the solar wind[0].

I'd assume we couldn't even get to that scale without solving vandalism, war, and insanity, but if not, then over the scale of a million years there will be twenty thousand space-Victorians and space-Taliban having space-Jihads against space-Buddha- and space-Baphomet-statues. I dread to think what the K2 version of the deliberate destruction and death of WW1 and WW2 would be like.

Likewise industrial accidents (space Chernobyl?), but if the big ones aren't solved there's a significant chance of a Kessler cascade rather than just, say, small incidents destroying 1% of the habitats every millennium[1].

[0] it doesn't get very deep on geological scales, but Dyson swarms aren't capable of being very deep on geological scales either.

[1] Completely arbitrary percentage of course, but that percentage would destroy half of what remained every 69-ish millennia.
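(For reference, under that stated assumption of a constant 1% loss per millennium: solving 0.99^t = 0.5 gives t = ln(0.5)/ln(0.99) ≈ 69 millennia, which is where the figure comes from.)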


They mean the heat death of the universe.


We are either the cancer or will have a cancer end us.


that's why I'm an entropian. Our core belief is that it's a sin to unnecessarily hasten the heat death of the universe.


I don't think it's really something we have to worry about; the amount of energy we're able to bring to bear is minuscule at that scale. Nothing we do will meaningfully hasten or prolong the heat death of the universe; our entire energetic history pales in comparison to a single supernova.

Maybe someday that won't be true, but not on a timeline we can really plan for. It probably wouldn't even be humans doing it at that point, but some descendant species (or constellation of many).

I'd propose we expend our energies prolonging the life of our ecosystem and take our challenges one century at a time.


> Even many big companies like Uber are operating at a loss, contrary to the public perception that these companies are a rip-off that exploit everyone to the max.

To state the obvious, operating at a loss is not contrary to exploiting your workers.


Yes, they imply they will.

> Notion may, however, use your Content or Customer Data to improve and train Notion’s own models.

https://www.notion.so/Notion-AI-Program-Terms-c0066e30039041...


Under the "Improvement of services" section, it says:

"We do not and will not use your Content or Customer Data to improve or train our models unless you give us express permission to do so."

Maybe they changed this in the past 7 hours?


Is opting-in to the Notion AI beta not "express permission"?


Yup, just took a look and they changed the text.


I will ask support and report back. Having my private data fed back to other users via "AI" is a deal-breaker for me and my organization.


Ok this took many back-and-forth emails, but I eventually got this from support:

> Marvin from Product Operations here and happy to clarify. The answer you're looking for is: no we are not using your data to train our models.


> Notion may, however, use your Content or Customer Data to improve and train Notion’s own models.

Nope, I'm out.


They’re screwed if they don’t make something viable before most people get pushed back into the office.


Is that a serious argument? PCs offered tremendous utility over phones and letters. What utility does VR provide over a regular PC? Oh that’s right, virtually none.


The PC of 1981 offered so little immediate utility that the microcomputer press was desperately pushing "maybe housewives can archive their recipes" as a use case for personal computers.

Remember, it wasn't networked, and the I/O was limited to a crappy matrix printer at best. There was practically no software, and most professionals and managers didn't know how to type since that was a secretary's task.


Yet VR has none of those limitations and it still hasn't taken off like personal computers. Weird, it's almost like VR is just an interface and anything you can view in it you can just view on a regular screen, which most people prefer.

Also, the comparison to early computers is a little weird. VR isn't a platform that can scale out to handle billions of e-commerce transactions or solve computational biology problems. Apples to oranges.


Nice, it can also work on non-linguistic semantic tasks. How long till we have to report to government thought-screening centers that check our responses to propaganda?


Only if it was a frag warhead; it could have been a continuous-rod warhead.

