luismarques's comments

It just exposes a UART (serial port) as a USB device. With the right driver you'll get a serial port (ttyUSB, COM port, etc.) in your OS.


It also happened to me. It was a Firefox compatibility issue; it played fine with Chrome.


This will come in handy. Thanks!


Author here. Ask me anything. Feedback also welcome.


If you want to see how this idea can be taken to much more sophisticated levels, check out D's ranges and algorithms. This article only covers the equivalent of input iterators / ranges. In D you can also find sophisticated ways to deal with the last part of the article, regarding how to ascertain the different capabilities of your range/type, in ways that go beyond the traditional type system and OOP concepts.

edit: (also, D's lazy keyword, which performs the transformation described in the article automatically)
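
A rough, untested sketch of both points (expensiveReport is just a made-up placeholder):

    import std.stdio;
    import std.range : iota, take;
    import std.algorithm : filter, map;

    // "lazy" parameter: msg is only evaluated if and when it's actually used.
    void trace(bool enabled, lazy string msg)
    {
        if (enabled)
            writeln(msg);
    }

    string expensiveReport()
    {
        return "imagine something costly here";
    }

    void main()
    {
        // A pipeline over a large range; nothing is computed at this point.
        auto squares = iota(0, 1_000_000)
            .filter!(x => x % 2 == 0)
            .map!(x => x * x);

        // Only the first five elements ever get evaluated.
        writeln(squares.take(5)); // [0, 4, 16, 36, 64]

        trace(false, expensiveReport()); // expensiveReport() is never called
    }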


It also wouldn't be a complete discussion of the subject without at least mentioning Haskell, in which everything is lazy by default:

    numbers = [0,1,2,3,4,5,6,7,8,9]
    take 5 numbers
    -- [0,1,2,3,4]

    numbers = [0..]
    take 5 numbers
    -- [0,1,2,3,4]

    evenNumbers = map (* 2) numbers
    take 5 evenNumbers
    -- [0,2,4,6,8]

    last evenNumbers
    -- <<infinite loop>>
I used simple list stuff in this illustration, but everything is lazy. I/O most notably (and sometimes problematically). Nothing runs until it's forced[0], and none of this requires any special annotation or plumbing.

----

[0] Yes, I know. But it's subtle, and we're in public here. https://wiki.haskell.org/Lazy_vs._non-strict


Haskell's laziness can also be a notorious source of "space leaks": an algorithm that appears to be O(1) in space, such as summing up a list of numbers, can actually use O(n) space by accumulating unevaluated invocations of (+) in memory. With more complex data structures, the gap between expected and actual memory usage can get even worse.

In larger Haskell programs, I've found that to be the most challenging issue to debug: "why does my program use way too much memory?"


Practically, the way to go here is: make your data structures strict (with "!") and make your control structures lazy.

So, don't create a list of numbers if you intend to sum it; use a non-lazy data structure.

The trick is that Haskell's common default structures are lazy.


I think he meant that the decoding latency is 1 cycle, not that the core can only decode one instruction per cycle.

That is, each baby takes 9 cycles to form, but per 9 cycles the population can have more than one baby.


I used to use D for larger tasks and Python for quicker ones, like processing some text file and so on. One day I realized that I preferred using D even for those smaller tasks, where quick and dirty solutions would do. One thing that helped was that the standard algorithms (from the std.algorithm module) are really useful and compose very well once you get to know them, and they allow solving those kinds of tasks both quickly and efficiently.
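
For instance, counting the distinct non-empty lines of a text file ends up as a short pipeline like this (rough sketch, untested):

    import std.stdio : File, writeln;
    import std.algorithm : filter, map, sort, uniq;
    import std.array : array;
    import std.range : walkLength;
    import std.string : strip;

    void main(string[] args)
    {
        // Count the distinct non-empty lines of the file named on the command line.
        File(args[1])
            .byLineCopy                  // lazily read lines as independent strings
            .map!(l => l.strip)          // trim surrounding whitespace
            .filter!(l => l.length > 0)  // drop empty lines
            .array
            .sort()
            .uniq
            .walkLength
            .writeln;
    }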


Try programming with (std.)ranges and (std.)algorithms. It's something completely refreshing, replacing a mess of loopy code with a clean pipeline of algorithms. The lazy nature of the standard algorithms and the clean syntax you get with the UFCS feature produce some really neat results. Even if you end up not using D any further, it can change your view of programming.
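
A trivial before/after to illustrate (untested sketch):

    import std.stdio : writeln;
    import std.algorithm : filter, map, sum;

    void main()
    {
        auto data = [3, 1, 4, 1, 5, 9, 2, 6];

        // Loopy version: mutable accumulator and manual bookkeeping.
        int acc = 0;
        foreach (x; data)
            if (x % 2 != 0)
                acc += x * x;
        writeln(acc); // 117

        // Pipeline version: lazy, no mutation, reads left to right thanks to UFCS.
        data.filter!(x => x % 2 != 0)
            .map!(x => x * x)
            .sum
            .writeln; // 117
    }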


Yeah, it's a lot like Lisp in that regard. I'm glad I learned D, even though I don't use it professionally, if only because it changed the way I look at some things. The algorithm chaining enabled by UFCS and the range-based standard library can lead to some very beautiful code (at least as far as C-family languages go). It also made me painfully aware of how often I copy strings in C++ (string_view cannot come soon enough).

Here's a snippet of code I hacked together in D for a bot that scrapes titles from the pages of URLs in IRC messages.

    matchAll(message, re_url)
              .map!(      match => match.captures[0] )
              .map!(        url => getFirst4k(url).ifThrown([]) )
              .map!(    content => matchFirst(cast(char[])content, re_title) )
              .cache // cache to prevent multiple evaluations of preceding
              .filter!( capture => !capture.empty )
              .map!(    capture => capture[1].idup.entitiesToUnicode )
              .map!(  uni_title => uni_title.replaceAll(re_ws, " ") )
              .array
              .ifThrown([]);
It uses D's fast compile-time regex engine to look for URLs, then it downloads the first 4k (or substitutes an empty array if there was an exception), uses regex again to look for a title, filters out any that didn't find a title, converts all the HTML entities to their Unicode equivalents (another function I wrote), replaces excessive whitespace using regex, then returns all the titles it found (or an empty array if there was an exception). There's stuff to improve upon, but compared to how I would approach it in C++ it's much nicer.


This looks pretty cool! I think I know what I am going to do over my next vacation! :)


Isn't this (just) dithering noise?

AFAIK, the reason it's so effective in the example is that adding the noise helps the quantization process in the posterization better represent the original color spectrum. Without the dithering the quantization error can keep adding up in a way that the posterization filter cannot control (but which the image author can engineer to be problematic, as surely was the case here). With the dithering you have a statistical guarantee that the quantization errors average out.
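
To make that last point concrete, here's a rough (untested) numerical sketch: quantize a constant 30% grey "signal" to a single bit, with and without dither, and compare the averages:

    import std.stdio : writefln;
    import std.algorithm : map, sum;
    import std.range : iota;
    import std.random : Random, uniform;

    void main()
    {
        enum n = 100_000;
        enum double value = 0.3;
        auto rng = Random(42);

        // Without dither every sample falls below the 0.5 threshold, so the
        // average is 0: the quantization error never averages out.
        double plain = iota(n)
            .map!(_ => value >= 0.5 ? 1.0 : 0.0)
            .sum / n;

        // With uniform dither in [-0.5, 0.5) a sample exceeds the threshold
        // about 30% of the time, so the average recovers the original value.
        double dithered = iota(n)
            .map!(_ => (value + uniform(-0.5, 0.5, rng)) >= 0.5 ? 1.0 : 0.0)
            .sum / n;

        writefln("plain: %.3f   dithered: %.3f", plain, dithered);
        // roughly: plain: 0.000   dithered: 0.300
    }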


Yes, as described in the article, this is just dithering. I've never heard of "stochastic resonance", but from what I can discern from the Wikipedia article, it's essentially the same thing, except applied to systems that are merely "nonlinear" and "bistable", as opposed to outright quantized.

It appears we are not the first to note the connection: http://www.ncbi.nlm.nih.gov/pubmed/11046260


In effect, yes, they are essentially the same thing. One of the (potentially annoying) things you'll see going through the literature is that the term sometimes has a very broad meaning and other times a very narrow one. The broadest is basically "random noise can be used to improve a signal." Under that definition, dithering would be a form of stochastic resonance. This biology article actually touches on the definition issues (http://journals.plos.org/ploscompbiol/article?id=10.1371/jou...).

There's also a case to be made that the definition of SR should be narrowed, and that a lot of what's called resonance isn't resonance at all (http://www.nipslab.org/files/PRE1995-SR-and-dithering-p4691_...).


> With the dithering you have a statistical guarantee that the quantization errors average out.

Could you point me towards somewhere this statement is made precise?


I don't know where to link you to, but here is a more detailed statement, which I think could be straightforwardly expanded into something precise.

Consider a signal S[i], i = 1...N. The human eye isn't actually perceiving S[i], it's perceiving some convolution of it, (S * w)[i], for a window function w. I.e., an area with 50% white pixels and 50% black pixels appears grey.

Suppose for simplicity w[i] = 1/k on i = 1...k.

Now add noise g[i] to the signal in a region where S[i] = alpha. Then S[i] + g[i] = alpha + g[i]. The expected fraction of pixels above a threshold T within the window is then 1 - cdf(T - alpha), where cdf is the cumulative distribution function of g.

Assuming the cdf is approximately linear with slope 1 near T - alpha (as it is for uniform noise of unit width), then 1 - cdf(T - alpha) ≈ C + alpha, so the window-averaged output tracks the original intensity alpha up to a constant offset.


http://xiph.org/video/vid2.shtml

This video (23 min) explains, among other things, how dithering of audio signals works in the frequency domain. Note: the main subject of the video is digital vs. analog signals, but he explains dithering as well. It's also just a very well done video; I like the way he presents and explains things.


Do you know what those cloverleaf-shaped metal beams are called, and how you can learn more about building hardware prototypes with those kinds of products?

I know very little about mechanical engineering and hardware prototyping, but I saw those metal thingies about a year ago in a DIY tinkerer community (they were used in a DIY 3D printer), and I have been wondering about that topic ever since.


Those metal beams are called aluminum extrusions and are sometimes referred to by the brand name "80/20".

They're super fun and useful in prototyping - kind of like an erector set for adults.


Thanks! What other items often go together with these aluminum extrusions? Is there a place or a book to learn about this topic, or is it something that people only learn through experimentation and mimicking?


In addition to the 8020 beams already mentioned, there are also MakerBeam (http://www.makerbeam.eu/) and OpenBeam (http://www.openbeamusa.com/).

Primarily they are used for building structures quickly and easily - a saw and a wrench are the only tools you need. The standardized brackets for each beam type allow you to make 90 and 45 degree angles.

But they are often used for more than just framing. The 3D printing community has embraced extrusions because you can also use them as bearing surfaces, mount motors and servos, limit switches, etc. Basically anything that has a hole big enough for a machine screw can be mounted to a beam either directly or through an easily made mount (usually to get the angle that you want - all it takes is some sheet metal).

The quickest way to learn is to look at examples. The OpenBeam website has lots of examples. The system is so simple that you can understand exactly what is going on just by seeing a picture.

