sbergot's comments | Hacker News

OAuth defines bearer tokens without requiring them to be JWTs.
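
To illustrate (my own sketch, not part of the original comment): a bearer token is just an opaque string as far as the resource server is concerned. The endpoint URL below is made up, and the token value is the sample token from RFC 6749.

    // Hypothetical call: the access token is an opaque string, not a JWT.
    // "2YotnFZFEjr1zCsicMWpAA" is the example token used in RFC 6749.
    const accessToken = "2YotnFZFEjr1zCsicMWpAA";

    const response = await fetch("https://api.example.com/resource", {
      headers: { Authorization: `Bearer ${accessToken}` },
    });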


Interestingly, other people are answering the opposite in this thread.


They're wrong.

From ECMA-404[1] in section 6:

> The JSON syntax does not impose any restrictions on the strings used as names, does not require that name strings be unique, and does not assign any significance to the ordering of name/value pairs.

That IS unambiguous.

And for more justification:

> Meaningful data interchange requires agreement between a producer and consumer on the semantics attached to a particular use of the JSON syntax. What JSON does provide is the syntactic framework to which such semantics can be attached

> JSON is agnostic about the semantics of numbers. In any programming language, there can be a variety of number types of various capacities and complements, fixed or floating, binary or decimal.

> It is expected that other standards will refer to this one, strictly adhering to the JSON syntax, while imposing semantics interpretation and restrictions on various encoding details. Such standards may require specific behaviours. JSON itself specifies no behaviour.

It all makes sense when you understand JSON is just a specification for a grammar, not for behaviours.

[1]: https://ecma-international.org/wp-content/uploads/ECMA-404_2...
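
A concrete illustration of that (mine, not from the spec): because ECMA-404 only defines the grammar, things like duplicate names and number semantics are left to each implementation. JavaScript's JSON.parse happens to keep the last duplicate and read numbers as IEEE-754 doubles; other conforming parsers may legitimately do otherwise.

    // JavaScript's behaviour; other conforming parsers may differ.
    console.log(JSON.parse('{"a": 1, "a": 2}'));          // { a: 2 } (last duplicate wins)
    console.log(JSON.parse('{"n": 9007199254740993}').n); // 9007199254740992 (double precision)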


> and does not assign any significance to the ordering of name/value pairs.

I think this is outdated? I believe that the order is preserved when parsing into a JavaScript Object. (Yes, Objects have a well-defined key order. Please don't actually rely on this...)


In the JS spec, you'd be looking for 25.5.1

If I'm not mistaken, this is the primary point:

> Valid JSON text is a subset of the ECMAScript PrimaryExpression syntax. Step 2 verifies that jsonString conforms to that subset, and step 10 asserts that that parsing and evaluation returns a value of an appropriate type.

And in the algorithm

    c. Else,
      i. Let keys be ? EnumerableOwnProperties(val, KEY).
      ii. For each String P of keys, do
        1. Let newElement be ? InternalizeJSONProperty(val, P, reviver).
        2. If newElement is undefined, then
          a. Perform ? val.[[Delete]](P).
        3. Else,
          a. Perform ? CreateDataProperty(val, P, newElement).
If you theoretically (not practically) parsed a JSON file into a normal JS AST and then looped over it this way, then because JS preserves key order, this would also wind up preserving key order. And because it would add those keys to the final JS object in that same order, the order would be preserved in the output.
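
A quick sketch of that claim (using non-integer-like keys, since integer-like keys are ordered numerically before string keys):

    const text = '{"z": 1, "a": 2, "m": 3}';
    const obj = JSON.parse(text);
    console.log(Object.keys(obj));    // ["z", "a", "m"] - insertion order preserved
    console.log(JSON.stringify(obj)); // '{"z":1,"a":2,"m":3}' - round trip keeps the order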

> (Yes, Objects have a well-defined key order. Please don't actually rely on this...)

JS standardized this in ES2015 because browsers already did it and loads of code depended on it (accidentally or not).

There is theoretically a performance hit to using ordered hashtables. That doesn't seem like such a big deal with hidden classes except that `{a:1, b:2}` is a different inline cache entry than `{b:2, a:1}` which makes it easier to accidentally make your function polymorphic.

In any case, you are paying for it, so you might as well use it if (IMO) it makes things easier. For example, `let copy = {...obj, updatedKey: 123}` relies on the insertion order of `obj` to keep the same hidden class.
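
A small illustration of the inline-cache point (engine-specific behaviour, e.g. V8's hidden classes; the function is made up):

    // Same keys, different insertion order => different object shapes,
    // so a function that sees both becomes polymorphic at this property access.
    function getA(o: { a: number; b: number }) {
      return o.a;
    }
    getA({ a: 1, b: 2 }); // shape {a, b}
    getA({ b: 2, a: 1 }); // shape {b, a} - a second shape for the same call site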


In JS maybe (I don't know tbh), but that's irrelevant to the JSON spec. Other implementations could make a different decision.


Ah, I thought the quote was from the JS spec. I didn't realize that ECMA published their own copy of the JSON spec.


Except it is biased in its conclusion:

> However, unlike our OOP example, existing code that uses the Logger type and log function cannot work with this new type. There needs to be some refactoring, and how the user code will need to be refactored depends on how we want to expose this new type to the users.

It is super simple to create a Logger from a FileLogger and pass that to old code. In OOP you also need to refactor code when you are changing base types, and you need to think about what to expose to client code.

To me, option 1 is the correct simple approach, but the author dismisses it for unclear reasons.
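
To make that concrete, a minimal sketch in TypeScript; the record-of-functions shapes below are my assumption, not the article's actual definitions:

    // Hypothetical types standing in for the article's Logger and FileLogger.
    type Logger = { log: (msg: string) => void };
    type FileLogger = { log: (msg: string) => void; flush: () => void };

    // Adapting the richer type down to the old interface is one line,
    // so existing code that takes a Logger keeps working unchanged.
    const asLogger = (fl: FileLogger): Logger => ({ log: (msg) => fl.log(msg) });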


I am not an expert, but is the dark matter theory testable?


Of course it is. It has already passed many tests (for example, gravitational lensing), while some dark matter candidates (WIMPs, primordial black holes) have effectively been ruled out through tests.


Dark matter is not a theory, per se. There are many, many theories that attempt to explain dark matter. Some of them have yet to produce testable hypotheses, others have already been tested.


Thank you. Dark matter is the issue in cosmology that "it appears as though undetectable matter is present in the universe, causing X, Y, and Z phenomena."

The issue that I have with people calling dark matter a theory is that they think it requires matter to solve. It doesn't. MOND is a dark matter theory. It explains (in part) why it appears as though undetectable matter is present in galaxies, causing disc velocities to not match expectations.


It's not, but it's accepted because it is the theory that best fits the observations. It has holes, but not as many as others. It will continue to be the accepted model until another one fits the data even better, or until we can prove/disprove the existence of dark matter.


I agree. There are enough standard places to put metadata in a website.


Grid has a lot of features, but flex is simple enough and still very powerful.


A simple example is Result<T>, which could be either Ok(T value) or Error(string message). In order to get the value, you need to switch over both cases, forcing you to handle the error case locally.
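
A hedged TypeScript rendering of that idea (the exact shape in the language being discussed may differ):

    type Result<T> =
      | { kind: "ok"; value: T }
      | { kind: "error"; message: string };

    function unwrap<T>(r: Result<T>): T {
      // The switch forces the error case to be handled before the value is usable.
      switch (r.kind) {
        case "ok":
          return r.value;
        case "error":
          throw new Error(r.message); // or handle it locally instead of throwing
      }
    }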


Do we? Is the global population about to collapse? Isn't an increase in the global population going to cause more issues?


Define collapse. The world population is on track to start declining in the next few decades. However, with 8-9 billion humans, how large a loss you would require before calling it a collapse is a debatable question.

The global population will continue to increase for a few decades, but there are more old people than young people, and it seems unlikely that we will reach a phase where births are high enough to counteract the coming deaths from old age in the near future.


I believe this is a good thing even if it will cause many issues. Humanity will have to live on Earth for a long time, and infinite growth is not sustainable. So at some point the population has to fluctuate. We will have to accept this reality.


The other massive contribution of Steam is the discoverability you get.


When you accept donations, you have to be transparent about how they are going to be used. They cannot change their mind like that.


They can't do it retroactively for already received donations (not ethically at least, I don't think it would be illegal in this situation), but it wouldn't at all be a problem for them to announce "donations from today on will be used in this new way" instead of making the announcement they just made.

(I personally think they made the right choice, am just responding to your comment disagreeing that it would be a transparency issue if they changed how things worked moving forwards.)

