
"Now I can just tell Claude to write an article (like the one you're currently reading) and give it some pointers regarding how I want it to look, and it can generate any custom HTML and CSS and JavaScript I want on the fly."

Yeah, I could tell that was the case when I clicked on the thumbnails, couldn't close the image, and had to reload the whole page. The good thing is that you could just ask the AI to fix this; the bad thing is that you assumed it would produce fully working code in one shot and didn't test it properly.


When I asked it to add the gallery, I also asked it to make sure the images close if you press Escape or click outside the image. I guess I wasn't thinking about mobile users, but that's definitely on me, not Claude :)

*EDIT* Prominent close button and closing on back navigation added (people will probably complain about hijacking the back button now)
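For reference, this is roughly the kind of handler involved, as a minimal TypeScript sketch (not the site's actual code; the element ids and structure are made up for illustration):

    // Three close behaviours: Escape key, click outside the image, back navigation.
    const overlay = document.getElementById("lightbox") as HTMLElement;   // assumed id
    const image = document.getElementById("lightbox-img") as HTMLElement; // assumed id

    function openLightbox(): void {
      overlay.style.display = "flex";
      history.pushState({ lightbox: true }, ""); // so Back closes the image, not the page
    }

    function closeLightbox(fromPopstate = false): void {
      overlay.style.display = "none";
      // If we closed via Escape/click, pop our own history entry to stay consistent.
      if (!fromPopstate && history.state?.lightbox) history.back();
    }

    document.addEventListener("keydown", (e) => {
      if (e.key === "Escape") closeLightbox();
    });

    overlay.addEventListener("click", (e) => {
      if (e.target !== image) closeLightbox(); // clicking outside the image closes it
    });

    window.addEventListener("popstate", () => closeLightbox(true)); // back button closes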


and again we can tell based on how the x isn’t centered in the close button

If a button with content out of center is a clear sign of LLM use, these tools are decades older than I realized.

I wouldn't call it a clear sign of LLM use myself, but in the year of our lord 2025 it should be unheard of; we've got so many nice layout tools nowadays. It's certainly below par if LLMs can't reliably manage it.

Bold of you to assume that they assumed something instead of thinking that they might just not give a shit about it.

To be fair, a lot of custom-built websites are crap too, and they generally cost a lot more time and money.

It's likely an arcing power line (see the Reddit comments): https://www.reddit.com/r/videos/comments/1lrk1rz/incredible_...


Reading through the comments and reviewing the video does indeed point to arcing power lines. I've seen videos of fast-moving arcs across medium-voltage lines that looked like a horizontal Jacob's ladder. The lines' overcurrent protection equipment might not trip instantly, as the current might be limited by enough impedance in the equipment. Disappointing reveal.


That sounds plausible.


Correct me if I'm wrong, but looking at the video this just looks like a 3D point cloud using equal-sized "gaussians" (soft spheres) for each pixel; that's why it still looks pixelated, especially at the edges. Even at low resolution, real Gaussian splatting artifacts look different, with spikes and soft blobs in the lower-resolution parts. So this isn't really doing the same thing as real Gaussian splatting, which combines different-sized, view-dependent elliptical Gaussian splats to reconstruct the scene, and it also doesn't seem to reproduce the radiance field the way real Gaussian splatting does.


I had to make a lot of concessions to make this work in real time. There is no way that I know of to replicate the fidelity of the "actual" Gaussian splatting training process within the 33ms frame budget.

However, I have not baked the size or orientation into the system. Those are "chosen" by the neural net based on the input RGBD frames. The view-dependent effects are also "chosen" by the neural net, but not through an explicit radiance field. If you run the application and zoom in, you will be able to see splats of different sizes pointing in different directions. The system has limited ability to re-adjust the positions and sizes due to the compute budget, which leads to the pixelated effect.
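To make that concrete, here is a rough sketch (a TypeScript pseudostructure of my own, not LiveSplat's actual code) of the per-splat parameters the network ends up choosing from the RGBD input; the field names are illustrative assumptions:

    // Per-splat parameters predicted from the RGBD frames.
    interface Splat {
      position: [number, number, number];          // 3D centre, seeded from the depth map
      scale: [number, number, number];             // per-axis size; fixing this gives the "soft sphere" look
      rotation: [number, number, number, number];  // orientation quaternion
      opacity: number;                             // 0..1 blending weight
      color: [number, number, number];             // base RGB; view-dependent terms adjust this
    }

    // A plain per-pixel point cloud effectively fixes scale and rotation to
    // constants, which is what produces the pixelated edges discussed above.
    const example: Splat = {
      position: [0.1, -0.3, 1.2],
      scale: [0.01, 0.004, 0.002],
      rotation: [1, 0, 0, 0],
      opacity: 0.85,
      color: [0.6, 0.5, 0.4],
    };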


I've uploaded a screenshot from LiveSplat where I zoomed in a lot on a piece of fabric. You can see that there is actually a lot of diversity in the shape, orientation, and opacity of the Gaussians produced [1].

[1] https://imgur.com/a/QXxCakM


Here is a cool demonstration of voice-to-instrument and instrument-to-instrument conversion (the inconvenient part is that for a new kind of output sound you have to train a model for around an hour to get good quality, but after that you can use it with different inputs quickly):

https://youtu.be/lI1LCfTx2lI?t=525

There is also Kits.ai https://www.kits.ai/tools/ai-instruments


From their about page:

Videolabs was born from the VideoLAN community and started by maintaining the VLC ports on mobile. It is now the main contributor to VLC, hiring its historical developers, and building custom solutions around the VLC and FFmpeg ecosystems.


The article says that Bose gave up and sold the rights because: "Sadly it was totally impractical for mass production in 2004"

What does that mean? Was it too complex or costly to make it back then? Did that change now with your developments?

Better tech is nice, but if it's too expensive or hard to mass-produce, then it could end up the same way it did with Bose.


Creating enough force to lift the car using linear motors requires massive copper coils and rare earth magnets, which are heavy and expensive. It's also wasteful, because in real-world drive cycles peak force is only needed for short periods of time and at relatively low linear velocities. Further, linear motors require significant packaging space, which requires the chassis to be designed around them. Instead, we use a much smaller motor and a high "gear ratio" using hydraulics to achieve high forces via mechanical gain. This reduces raw material cost by a factor of >10x while achieving similar performance.
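For intuition, here is a back-of-the-envelope sketch of how a hydraulic area ratio provides that mechanical gain; the numbers are illustrative assumptions, not our actual specs:

    // Pascal's principle: force scales with piston area, so a small motor
    // driving a small piston can produce a large force at the larger piston.
    const motorForceN = 500;          // force the small electric motor applies (assumed)
    const smallPistonAreaCm2 = 2;     // pump-side piston area (assumed)
    const largePistonAreaCm2 = 40;    // actuator-side piston area (assumed)

    const areaRatio = largePistonAreaCm2 / smallPistonAreaCm2; // mechanical gain: 20x
    const actuatorForceN = motorForceN * areaRatio;            // 10,000 N at the actuator

    // The trade-off: the large piston moves slower by the same ratio, which is
    // acceptable because peak force is only needed briefly and at low speeds.
    console.log(`gain ${areaRatio}x -> ${actuatorForceN} N`);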


If I recall correctly, it was heavy. I think speculation had it adding over 20% to the weight of the Lexus.

And those electric motors and the control hardware would not have been cheap. But nobody knows how to estimate the cost, just that it would have been prohibitively expensive.


There is also KaraKeep:

https://github.com/karakeep-app/karakeep

Seems very similar.


Also, in many places the cracks seem to be overpainted or filled in. I guess that's from the restorations...

I just looked it up, and there is a picture from an analysis showing its possible state before the restoration:

https://media.springernature.com/lw685/springer-static/image...

"In UV fluorescence, the natural resin varnish layer fluoresces greenish, and areas retouched in 1994 can be distinguished from the original paint as they appear darker"

The full paper: https://www.nature.com/articles/s40494-019-0307-5


and the next step is a monthly subscription fee to use your camera...


I bought a GoPro last year after reading about a feature, HyperSmooth Pro stabilization. Granted, I didn't dig into the specifics; I just figured that if I bought their best camera it would be supported. The feature is only available if I also subscribe to their $100-per-year membership. I think some lesser, non-Pro version is available without the membership. It felt like a bait-and-switch when I unboxed it and tried to use the feature for the first time.


If you've read this here first, mark the day. This is coming.


Huh "everything text-to-X"? Most video gen AI has image-to-video option too either as a start or end frame or just as a reference for subjects and environment to include in the video. Some of them even has video-to-video options too, to restyle the visuals or reuse motions from the reference video.

