
You can get them with different voltage drop-off curves

E.g., this battery holds 1.5 V for ~70% of its capacity before gradually reducing to 1.0 V

https://www.xtar.cc/product/xtar-1-5v-aa-clr-3300-lithium-ba...

edit: And one with USB-C and linearly decreasing voltage curve https://www.xtar.cc/product/xtar-aa-lithium-lr-2000mah-usb-c...


That's cool, thanks for the links.


It is in the pipeline at least: https://github.com/jellyfin/Swiftfin/discussions/1294

As my needs are quite simple, I currently just use VLC with an SMB share. Works quite well; VLC plays standard .mkvs just fine! http://www.videolan.org/vlc/download-appletv.html


Oh, and this just dropped - a new open source Jellyfin client: https://github.com/ghobs91/mediora

Even supports some of that *arr stuff


That's because the car lobby only cared about electric vehicle tariffs; petrol cars from China are tax-free

(There are also anti-dumping tariffs on electric bikes from China; I wonder if it's the same lobby...)


Two years back, the S-Bahn ticket machines at the airport only supported chip-and-PIN, not contactless. I had to open my banking app to look up my PIN, as I wanted to use my corporate Amex


If you just want to add POI data, then Every Door is a good choice that also works on iOS

CoMaps would be a good map app, and it will also display when POIs and opening hours were last confirmed (the only OSM app to do so AFAIK)

https://every-door.app https://www.comaps.app


I'm skipping Tahoe for now as well, but is it safe to upgrade my iOS devices to iOS 26? No sync issues or anything with my MBP?


It's generally best to update both in tandem.


Just discovered this, but there's now native support for the Ribbon alt-shortcuts as on Windows!

You can finally use a MacBook Pro without downsides in a professional environment that heavily uses Excel, like consulting.


So this will compete with Z-Wave, which already operates in the 800-900 MHz space?

It's very confusing with a new Zigbee standard when I thought it was being replaced with Thread


> It's very confusing with a new Zigbee standard when I thought it was being replaced with Thread

And from the same organization that co-designed and promotes Matter.

Personally, I'm very happy that Zigbee continues to be developed. I am not very enthusiastic about everything being IP addressable (even if it is just ULA), or about the convoluted Thread flow where a bunch of border routers require you to initiate the pairing with a phone over BLE and then hand it over to the router. I have an Eve Energy Thread + Matter plug and it takes many attempts to pair it correctly. Pairing most Zigbee devices is just a matter of permitting join on your coordinator and holding/pressing the pairing button on the device.

I wish that Apple and Google would just add a Zigbee coordinator (or just leave home automation to others) and put some effort into supporting a wide variety of devices rather than trying to disrupt a perfectly working standard. They often cannot even bother implementing the latest Matter spec timely.


As someone who's run (and continues to run) both Z-Wave and Zigbee networks for over 10 years, I find the direction of Zigbee rather frustrating. Zigbee used to be the antithesis of Z-Wave with its frustrating "licensing", and now it seems they're toeing that same line. Most likely because they got in bed with Google, Apple, and the garbage that is Matter.


What’s wrong with Matter?


It didn't solve any of the issues it claimed it would. And if you look across open platforms (e.g., Home Assistant), it has comparatively low uptake among available integrations, because Matter is a vehicle for proclaimed interop by walled-garden experts (e.g., Apple, Google, etc.).


Could you be more specific?

I've got several Matter smart plugs and a couple Matter smart bulbs.

They all were quick and easy to set up with their first Matter controller (an RPi4 running Home Assistant or an iPad with Apple Home), and quick and easy to add to whichever controller I didn't use as the first controller.

They all worked then without requiring me to get their manufacturer's proprietary app or make an account or anything like that.

Some needed a firmware update to support Matter 1.3, and so I had to use the manufacturer's app for that. Some also have proprietary functions and options (for example one of bulbs supports some kind of presence detection if you have at least two of those bulbs in the same room) so I might get the manufacturers app if I decide I want to use those functions.

Adding them to the manufacturer's app does not interfere with their use as Matter devices so if I do decide I want to use some of the proprietary stuff it doesn't break things.


1) If you have to use a manufacturer's app for updates, that already falls under my point. 2) There are plenty of threads out there discussing manufacturers that leverage Matter but force you to use their own controller. A lot of these are targeted at builders as another revenue stream.

Finally... this [0] does a better job of explaining the issues with Matter. But Matter is ultimately a joke: it was promoted as a standard for interoperability by vendors nobody should trust at this point.

[0] https://community.home-assistant.io/t/if-matter-is-a-suppose...


Pairing worked flawlessly with my Tado X thermostats, to add another anecdotal data point.


You can most likely use VueScan [1]; I use it with an old ScanSnap i500 (or something)

[1] https://www.hamrick.com


I have Vuescan and it’s not even close.


Love VueScan for my film scanner!


A lot of upvotes, but no discussion :)

How would you approach migrating a ~5 TB OLTP database that fundamentally contains analytical time-series data? I'd think something like Apache Iceberg could be a better data store and make writing much easier (almost just dump a Parquet file in there)

It’s exposed to the outside world via APIs


That 5 TB of data will probably be 300-400 GB in Parquet. Try to denormalise the data into a few datasets, or just one dataset if you can.

DuckDB querying the data should be able to return results in milliseconds if the smaller columns are being used, and better still if the row-group stats can be used to answer queries.

You can host those Parquet files on a local disk or S3. A local disk might be cheaper if this is exposed to the outside world as well as giving you a price ceiling on hosting.

If you have a Parquet file with billions of records and thousands of row-groups, hosting on something like Cloudflare, where there is a per-request charge, could get a bit expensive if this is a popular dataset. At a minimum, DuckDB will look at the stats for each row-group for any column involved in a query. It might be cheaper just to pay for 400 GB of storage with your hosting provider.

There is a project to convert OSM to Parquet every week and we had to look into some of those issues https://github.com/osmus/layercake/issues/22

