> When handing out references of something bound with let mut, why do i need to do &mut instead of just & ?
Sometimes, you still want to immutably borrow a mutable variable. Otherwise, you wouldn't be able to have multiple references to a variable declared with `mut`, which would be an unnecessary limitation.
EDIT: Here's an example of something that wouldn't otherwise work if borrows of mutable variables were implicitly mutable:
struct Point {
    x: i32,
    y: i32,
}
fn add(p1: &Point, p2: &Point) -> Point {
    let x = p1.x + p2.x;
    let y = p1.y + p2.y;
    Point { x: x, y: y }
}
fn main() {
    let mut some_point = Point { x: 1, y: 1 };
    some_point.x = 2;
    // `some_point` is borrowed twice here, which wouldn't be possible if the borrows were mutable.
    let other_point = add(&some_point, &some_point);
    println!("({}, {})", other_point.x, other_point.y);
}
The question isn't why does the function declaration need to have &mut, it's why does the call site need to have &mut?
as in assume you had:
fn reset(p: &mut Point) { p.x = 0; p.y = 0; }
Why can't this be called like so:
reset(&some_point);
rather than needing to say
reset(&mut some_point);
To some extent it's nice that the call site must document the behavior visually, but it's more a matter of forcing a style guide than anything that's required to compile & work.
> To some extent it's nice that the call site must document the behavior visually, but it's more a matter of forcing a style guide than anything that's required to compile & work.
I've always figured that was the reason. It's immediately obvious at the call site that a borrow is happening, and that it is a mutable borrow. Otherwise, if you wanted to know, you'd need to inspect the signature of each function the variable is passed to.
(Compare, perhaps, to C++ references, where it isn't apparent at the call site at all that your variable might change.)
When you use `&mut some_point` (i.e. mutably borrow "some_point"), the compiler requires that "some_point" is not borrowed anywhere else at the same time. In order to enforce that invariant, it needs to know which borrows are mutable and which borrows are immutable.
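A minimal sketch of that rule in action (hypothetical variable names; the exact error wording depends on the compiler version):

fn main() {
    let mut x = 5;
    let r = &x;        // shared (immutable) borrow of x
    // let m = &mut x; // rejected here: cannot borrow `x` as mutable because
    //                 // it is also borrowed as immutable
    println!("{}", r); // last use of r, so its borrow ends here
    x += 1;            // mutating x is fine once no other borrows are live
    println!("{}", x);
}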
Perhaps what you're wondering is "why can't the compiler infer when a borrow needs to be mutable instead of having the programmer specify it"? I can't say for certain what the answer to this is, but my guess is that it would be rather expensive. Consider this example (reusing the "add" function from above):
let mut foo = Point { x: 1, y: 2 };
let bar = &foo;
add(&foo, &foo);
...
// Much later
reset(bar);
Under the proposed "mutable reference inference" rules, should this compile? It's not necessarily trivial to determine this (especially if the "..." contains a lot of code), but it should not, since `bar` must be a mutable borrow to be able to pass into `reset`, which means that the borrows of `foo` passed to `add` shouldn't be allowed. This means that the compiler would have to do a rather expensive lookahead to determine whether any borrows are valid (and some people already complain that Rust compiles too slowly!). Forcing users to specify when borrows are mutable makes the compiler's job much easier, not to mention the readability benefits that come from making mutability explicit.
Borrow checking happens quite late in the compilation process. It has to, because it can't know the lifetimes of the borrows until all the functions are resolved (borrows are allowed to leave the scope of the call via requirements imposed by functions, even to the point of being required to be static). If this choice was made for technical reasons, this is not why.
Considering how seriously Rust takes explicitness in general, I strongly suspect this was done purely for readability.
If struct A implements trait S and trait T, and there are two functions
fn foo(x: &S);
fn foo(y: &T);
in scope, then the compiler needs you to disambiguate foo(&a) if a: A. This isn't a whole lot different than disambiguating the same expression if the functions are:
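(presumably a pair like this:)

fn foo(x: &A);
fn foo(y: &mut A);

Then the call site has to write foo(&a) or foo(&mut a) to say which one it means. (Rust doesn't actually allow two fn items with the same name in one scope, so both pairs here are sketches of the ambiguity rather than literal code.)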
My best interpretation of his question was more along the lines of: why do you have to mark it as `mut` if you aren't going to change it in the function? Suddenly your function only accepts mutable pointers, not just any pointer.
Rust deliberately doesn't do interprocedural analysis, so function signatures are the ultimate source of truth. If you want to call a function that takes a `&mut` then all the rules for taking a `&mut` apply. If the function that takes a `&mut` doesn't need a `&mut`, however, then that's probably just bad API design, and is something to be avoided (I've never seen any such thing in the wild; the guideline in API design is to make your function signatures as unrestrictive as you can, and `&mut` is as restrictive as it gets).
To me it lets me know if an object will be changed by the function or not. If it's just & I know that my object is just how I left it. If it is &mut I need to think about how it was changed.
If it's just &, the compiler knows the pointee is just how the caller left it[1], and it can optimize on this assumption. For example, if it had the contents in a register before the call, it knows they're still valid after the call. This is the same sort of thing "restrict" gives you in C, but unlike "restrict", using & is idiomatic and the compiler will spot mistakes.
If it's &mut, those assumptions aren't valid.
[1] except, iiuc, the contents of UnsafeCells (and things built on top of them, such as RefCell).
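A rough sketch of what that buys, with hypothetical observe/bump functions (whether the optimizer actually keeps the value in a register is up to it, but the aliasing rules are what make it possible):

fn observe(x: &i32) {
    // A shared &i32 can't be used to change *x
    // (short of interior mutability via UnsafeCell).
    println!("saw {}", x);
}

fn bump(x: &mut i32) {
    *x += 1;
}

fn main() {
    let mut n = 41;
    observe(&n);
    // The caller may assume n is unchanged across the call above,
    // e.g. keep it cached in a register instead of reloading it.
    bump(&mut n);
    // After handing out &mut n, any cached copy must be considered stale.
    println!("{}", n); // 42
}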
Well the compiler knows the signature of the function you pass it to so it should be able to optimize it anyways.
I think in most if not all cases the compiler can figure out if &thing needs a mutable or immutable reference. I think the only difference is the readability to the programmer and I agree with the decision made requiring &mut for a mutable reference.
Sorry, I typed the wrong word. I've updated the parent to be less confusing. s/compile/optimize/
For reference the first line used to be: "Well the compiler knows the signature of the function you pass it to so it should be able to compile it anyways."
> &T vs &mut T is very significant in Rust
Yes, especially for function signatures it is critical, however the ancestor was talking about using & to take a reference. When using & to take a reference I don't think &mut needs to be required. The compiler could probably figure it out.
Oh, I'm talking about having the type of reference declared in the function signature. Yes, I agree repeating it in the invocation is for the programmer's benefit. Rust doesn't have overloading, so there's no ambiguity.
I took that to mean that Scala's influence on Rust is non-obvious. I might describe Scala's general design philosophy as "exuberant", whereas Rust has historically tried to have as little machinery as possible to achieve its safety goals. Scala's much more flexible, much less explicit, and much more TIMTOWTDI.
Thanks. I've done scala for a few years and only dabbled in rust but it certainly seemed plausible that various decisions in rust, about what to do and not to do, had been informed by scala. Glad to have it confirmed.
This is all part of "destructuring pattern matching". Let's say we have:
struct Something<T>(T);
let x = 42;
When we say:
let y = Something(&x);
...we're constructing a value of a particular type on the right-hand-side of the `=`. Written out, it would be something like "take the value x, calculate its address in memory, and wrap that reference in Something()".
let Something(&z) = y;
...does the reverse operation. Written out, it would be something like "take the value y, remove the Something() wrapper to get the value inside, dereference the pointer to find the value pointed to, then assign a copy of that value to the variable z". At the end, z == x.
Note that the last step is copying a value: it only works if the value is of a type that can be safely memcpy()'d (in Rust terminology, if the type implements the Copy trait). In this case, the value happens to be the integer 42, and integers are Copy, so it all works out. However, not all types are Copy, so if you can't copy the value, what happens?
If the value you're destructuring owns the inner value, it's easy—you just become the new owner:
let x = Something(123); // x owns this Something instance.
let y = Something(x); // the value from x is moved into y.
let Something(z) = y; // the value inside y is moved into z.
On the other hand, if the value you're destructuring doesn't own the inner value, you can't just take ownership of it. If you want a handy name to use when talking about that value, it's going to need to be a pointer.
let x = Something(42); // a value that is not Copy.
let y = Something(&x); // a reference, so we can't take ownership
//let Something(&z) = y; // Can't take ownership of x!
let Something(ref z) = y; // z is a reference to x
let x = Something(42); // a value that is not Copy.
let y = Something(&x); // a reference, so we can't take ownership
let Something(ref z) = y; // z is a reference to x
z is a reference to &x, and is thus of type &&Something<i32>. You should normally just do
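the binding without `ref`, which gives you the single reference that's already stored inside. A small sketch of both, reusing the same hypothetical Something wrapper:

struct Something<T>(T);

fn main() {
    let x = Something(42);     // Something<i32>, which is not Copy
    let y = Something(&x);     // Something<&Something<i32>>

    // `ref z` takes a reference to the inner reference, so z is &&Something<i32>.
    let Something(ref z) = y;
    println!("{}", z.0);       // field access auto-derefs, prints 42

    // Binding without `ref` just copies the inner &Something<i32> out,
    // so w is a single reference.
    let Something(w) = y;
    println!("{}", w.0);       // prints 42
}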
Having done scala for a few years and only dabbled in rust, I also get the impression that the rust community is trying to have good default answers to questions like how to do json, or which web framework to use. As opposed to scala-land where if you ask three scala devs those questions you'll get five answers.
That said, it seems more common in Rust for there to be a consensus about which library to use. Perhaps this is simply due to its age? I've seen a few libraries recommend you migrate to others because the pace of Rust's conceptual evolution made the first library outdated in short order. This is somewhat less so now, though very recent features like custom derive and even impl Trait can dramatically affect the aesthetics of a library and how someone chooses to implement it.
> If you lean more towards the functional programming paradigm side of Scala then you’ll probably love the following about Rust’s type system
Some of the more popular functional languages have advanced type systems, but not all functional languages are statically typed. Lisp, the granddaddy of all FP languages, is dynamically typed. So is Elixir.
Static type systems can greatly hinder FP. Or to be more specific, language semantics and runtime behavior (encoded by the type system) can influence how powerful your FP is - and by "powerful", I mean how much boilerplate you need and how much you can generalize things.
So yes, certain characteristics of type systems do help with FP while not being core to it. They can certainly make things more pleasant, though.
I'm all for that, but first we probably need to actually define what "functional programming" means, which seems to be one of the great unresolvable debates of computer science. :P
To wit, I can think of no reasonable definition of functional programming that includes Lisp, Scheme, ML, and Haskell that does not also include every popular language aside from C. Even Java has first-class functions these days, sort of...
I tend to use the term "function-oriented" rather than "functional" since I think it fits the spirit of the phrase better. You're right that language features alone won't distinguish enough, and that means that functional programming is more about culture and standard practices. Once you embrace that, it's a bit easier to wrangle out a definition.
The definition I have in my head is something along the lines of "functions are the tool that the language prefers to address common software design concerns." So for example, Encapsulation is a big deal when designing software. Object-oriented languages will address this concern with their object system (via accessibility modifiers), while function-oriented languages tend to address this with functions, via lexical scoping and closures. OOP languages use their object system to achieve runtime-dynamic behavior through inheritance or composition or interfaces, while function-oriented languages will more often try to pass first-class functions as arguments. And so forth.
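To make that contrast concrete, a tiny Rust sketch (hypothetical make_counter): the state is encapsulated by a closure's environment rather than by an object's private fields.

fn make_counter() -> impl FnMut() -> u32 {
    let mut count: u32 = 0; // only reachable through the closure below
    move || {
        count += 1;
        count
    }
}

fn main() {
    let mut next = make_counter();
    println!("{}", next()); // 1
    println!("{}", next()); // 2
}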
It's not a 100% thing. Common Lisp will solve some of its problems with macros or its module system, and ML will solve some of its problems with types. But you do find that these languages (or communities) prefer functions over their object systems, while e.g. Python or Ruby prefer the reverse.
Although, I do find myself wondering if Haskell should actually be considered a type-oriented language. Again, it's not just about the features of the type system, it's the fact that the type system is the main tool that it tries to use to solve every problem about designing software.
We see a lot of definitions of functional bandied about, often conflating functional with Haskell or with Scala. Those definitions seem to ignore the history of functional programming, including Lisp, as qsymmachus noted. So that's silly. And conversely, one sees a lot of definitions of OO that appear to mean "Java" or "C++". Again, nonsensical when one considers Self, or to a lesser degree Smalltalk.
I do like your conception wherein a language is functional to the degree to which it prefers functions as its favourite tool for solving problems, and/or OO to the degree to which it prefers objects as its favourite tool.
Regarding encapsulation as your particular example, OO is clearly fairly obsessed with encapsulation, being almost the definition of an object: a bundle of encapsulated state with the externally-facing behaviour that that object affords. I think that many functional communities/practitioners see this as a much less urgent issue, often going so far as to make no active effort toward encapsulation at all (i.e. not everyone bothers with the lexical-scopes-and-closures solution). Thus we get ideas like "smart functions, dumb data", an idea which makes no sense in OO land.
What's overblown in my opinion is the treatment of "functional" or "object-oriented" programming as a feature of languages and not programs (though certain languages definitely make it easier).
> We can model the world as a collection of separate, time-bound, interacting objects with state, or we can model the world as a single, timeless, stateless unity. Each view has powerful advantages, but neither view alone is completely satisfactory. A grand unification has yet to emerge.
I hear you brother. And I should add that I don't believe in static typing being a Great Thing to Have. Strict typing is a good thing, but static typing?
I feel static typing is like those little training wheels fitted to bicycles so kids don't fall down when riding a bike. Yes, you can never fall down when using those wheels, but (a) your cornering will be slower and (b) you can still crash into something, fall into a pit, etc...
> Lisp, the granddaddy of all FP languages, is dynamically typed.
Lisp is a multi-paradigm language, and FP is just one of its paradigms. You're referring to Standard ML or Haskell, and that should explain why people conflate FP with static typing.
I think this was more to point out that for those who like the functional aspects of Scala's type system with respect to Java's, but don't like being hamstrung by the need for Java compatibility, Rust has the goods. I didn't read it as espousing static types for FP a priori.
Two questions: from the perspective of a Scala dev, does Rust have a solid collections library (including persistent structures such as Scala's Vector type) with mutable counterparts (e.g., ListBuffer)?
Second, from a quick glance it seems Rust's top-level concurrency model (locks, thread ownership) is the polar opposite of Scala's (Actor-based) -- is this correct? Scala's concurrency model (based on Erlang), along with trait-based multiple inheritance, were Scala's initial selling points for me.
Rust has no top level concurrency model. The standard library provides an abstraction over OS threads, and that's about as much as there is. The ownership model is sort of concurrency-model-agnostic.
Currently the popular Rust concurrency model is Futures. It won't be the only concurrency model used.
Is actor-based really considered top level for Scala? Akka is still a framework and not in the language itself, I'd argue the core concurrency is Java-based thread pooling enriched with Scala's Future type.
That said, I do agree that Akka is heavily favored as the ideal in Scala development as part of Lightbend's take on the Scala ecosystem, and maybe that was mostly your point.
Actors (not Akka's) used to ship as part of the standard library. I'd argue they were the killer app that drew many people in during Scala's early days. At some point Akka grew bigger and matured, so the original actor library was spun off into its own library, basically deprecating it in favor of Akka.
Rust doesn't have a good persistent collections library; they're hard to do without a GC. On the other hand, the borrow checker makes it easier to deal with shared mutable data, so there is less of a need for one.
I'm guessing you're asking if Rust has a REPL? Currently there is no REPL included in the standard language, but you can download the rusti project [1] using cargo.
Macros are easier to do with Rust (than in Scala).
IDE: happy with MS Visual Studio Code
Syntax is not too different.
Type system: He likes it.
Calls C code with a similar syntax to doing it in Scala.
Rust allows you more control over your memory allocation (or forces you to take control).
-------
My own take on Rust (haven't used the language yet, only looked at specs) is that it appears to be a more "sane", state-of-the-art version of C++, streamlined, without all the cruft and all the legacy. And with a much better object-oriented system based on Traits.
>> it appears to be a more "sane", state-of-the-art version of C++
Some time ago I commented on how horrible C++ has become and said it would be nice if someone made a new language with all the efficiency and features of C++ but nicer to program in. Someone said essentially "that's Rust". You are another confirmation of that. I have yet to take the plunge but hope to some day.
I had a similar conversation with someone who I consider an expert in C++ after a few months of working in C++ for the first time, and I said basically the same thing you did (only already having Rust in mind when I made my comment). To my surprise, he mentioned Rust immediately, instead of my having to bring it up. Having learned C++ after Rust, I had kind of assumed that my view of Rust as a sort of reboot of C++ wasn't going to be shared by someone with so much C++ experience.
To be fair, Objective-C has existed since 1984 and it adds nice OOP to C, that is, Smalltalk-style OOP.
However, I must say that when I was first introduced to C++, having been totally familiar with C, I thought that Bjarne Strostrostrostrostrostro
*** - Program stack overflow. RESET
... Strostroup was a genius. I think C++ was a nice idea, but ended up suffering from the "yeah, let's add this feature too" syndrome. I really wonder if there is one person in the earth that actually knows the whole of the C++14 language.
'Nicer' is quite subjective. There are definitely people who won't appreciate Rust's sophisticated type system, functional idioms, or obsessive control over mutability.
> And with a much better object oriented system based on Traits
Note that Traits are a system for generics specifically. You can have useful OO without generics (C++ is useful without templates; Go doesn't have them at all).
Traits go a bit beyond generics, in my view. You can say I'm a "smug Lisp coder", but after having used CLOS (Common Lisp Object System), I'd say that generics are one of the most useful things you can have in OOP.
With Traits you can separate the implementation of certain methods (implementations of Traits) from object definitions. Which for me is excellent, and approaches CLOS' way (in CLOS, methods are not directly coupled to a particular object; methods are applied to the combination of one or more objects).
In a similar way to the English language, where you apply an action involving many subjects: "Angel transports his luggage using a car." Here the 'transport' action can't really be bound to the Person class, nor the Luggage class. Perhaps it belongs on the Car class, but with CLOS you can just define "transport" to act on the combination of <actor>, <object to be transported> and <vehicle>.
"Transport", thus, is a "generic method" and can be defined (implemented) for all kinds of class combinations.
With Traits you can achieve something more or less similar, by defining the necessary Traits for each kind of class, to be able to have all of them "transport" harmoniously. So Traits can link (and then implement) the operations performed between classes, without having to bind the code to a particular class.
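A loose sketch of that in Rust, with hypothetical Person/Luggage/Car types and a Transport trait (dispatch is still on the implementing type rather than on every argument as in CLOS, but the operation stays decoupled from any single class):

struct Person { name: String }
struct Luggage { weight_kg: u32 }
struct Car;

// "transport" lives on a trait, not bolted onto Person, Luggage, or Car.
trait Transport<A, C> {
    fn transport(&self, actor: &A, cargo: &C);
}

impl Transport<Person, Luggage> for Car {
    fn transport(&self, actor: &Person, cargo: &Luggage) {
        println!("{} transports {} kg of luggage by car", actor.name, cargo.weight_kg);
    }
}

fn main() {
    let angel = Person { name: "Angel".to_string() };
    let bag = Luggage { weight_kg: 20 };
    Car.transport(&angel, &bag);
}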
In Rust, you can have functions/enums/structs that are generic without traits, and you can have traits that aren't generic. Generics and traits are orthogonal features.
The standard library's own `Vec` type is an example of a struct that is generic without traits. See the source: https://github.com/rust-lang/rust/blob/4ed2edaafe82fb8d44e81... . In fact, almost no struct or enum uses traits in combination with generics. On the other hand, generic functions do almost always leverage traits, because otherwise you basically can't do anything in the function with the generic parameter.
However, note that there does exist a very important generic function in the stdlib that doesn't use traits at all: `std::mem::drop`. Here's the source, which is very, very interesting: https://github.com/rust-lang/rust/blob/4ed2edaafe82fb8d44e81... . And no, I'm not being facetious when I call it interesting; figuring out what `std::mem::drop`'s purpose is from its amusing implementation is a test that you understand what ownership means in Rust. :)
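For reference, the whole implementation is essentially just an empty function; taking the argument by value is what does the work:

pub fn drop<T>(_x: T) {
    // Ownership of `_x` moves into this function; since the body does nothing
    // with it, the value is dropped as soon as the function returns.
}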
That's the way round I meant. Yes you can have generics without traits (particularly with data structures as you've pointed out). Not sure of anything that uses traits without generics.
> Not sure of anything that uses traits without generics.
Traits do a lot of things in Rust that have nothing to do with generics. :) Implementing a trait is how one gives a type a destructor. Implementing various traits (the ones in std::ops) allows one to do operator overloading for user-defined types. Defining a trait allows one to give methods to types that are defined in other compilation units, including the standard library, e.g.:
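(a sketch with a hypothetical `Double` trait on i32:)

trait Double {
    fn double(&self) -> Self;
}

// i32 is defined in the standard library, yet we can still hang a method on it
// by implementing our own trait for it.
impl Double for i32 {
    fn double(&self) -> i32 {
        *self * 2
    }
}

fn main() {
    let n: i32 = 21;
    println!("{}", n.double()); // 42
}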
But those are excellent examples of the use of generics. They allow you to write code that operates on a generic object, not knowing what type it is, but knowing that you can read from it.
Oh, I think I see what you're saying. If I'm understanding correctly, then you're saying that the types that traits define are only used with generics? This isn't quite correct, as you can use trait objects to get dynamic dispatch rather than using generics. But yes, in general, functions that are defined to take traits as parameters generally use generics. I'm not sure I'd still consider the idea of traits to be inherently tied to generics, though; the Iterator trait, for instance, is usually just used to call Iterator methods on different types rather than as a way to bound the types of function parameters.
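For example, a function along these lines (hypothetical name):

fn foo<T>(x: T) -> T {
    x
}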
This has no trait bounds, and is clearly generic. You typically can't do very much with completely free type variables, but Rust has syntactic support for them.
I personally don't feel Rust fits into either of the two big schools of OOP, but that doesn't mean that it's not OO. Part of the problem with OO is that the definitions are hard.
I tend to think of OO as a spectrum rather than a binary thing, and I think that Rust certainly belongs somewhere on it. I'm not convinced there's any language that can be considered "completely object-oriented"; Java has primitives, and Python, while considering everything an object, is perfectly content to let you use functions and variables without writing a single class. JavaScript doesn't even have classes, but I think it still can be considered to be at least partially OO. I might just have a more liberal than average interpretation of what OO means, though.
I think that's an unreasonably strict interpretation of OO, actually.
First, lack of classes is no reason to eliminate JS, it's not Class-Oriented programming. (not that I think JS is particularly OO; certainly not "completely OO")
Second, it's not Object-Only programming, so I'm not sure why you'd rule out any language with primitives. Much as I detest Java-style OO, I think it's very clear that its orientation is towards objects.
And third, even if you want to stick to those two rules, what about Ruby? Every value is an object, it has classes, and you can't write functions (only methods).
I didn't mean to give the impression that I didn't consider JS to be object-oriented, although I realize now I wasn't quite as clear as I originally intended. When I said "at least partially OO", I meant that I wouldn't agree with an assertion that said JS isn't OO at all, not that it's only partially OO. After all, Rust doesn't have classes, and my original point was that I think Rust is somewhat OO.
Similarly, I wasn't saying that Java isn't OO either; I was attempting to make the point that I could see plausible arguments saying that either Python or Java is "more OO" than the other. Personally, I consider Java to be more OO than Python, but I'm not sure I could make a compelling counterargument to the idea that Python is more OO due to everything being an object.
As for Ruby, I'm not sure I understand what you're saying; Ruby lets you just write top-level functions and variables just like you can in Python. I also think you might have misinterpreted my giving examples as being "rules"; I certainly can't come up with any objective rules to determine whether a language is OO or not. I must not have made myself very clear, but the main idea behind what I was trying to convey is that I don't think anything is "completely" object-oriented and that plenty of languages fall somewhere on the spectrum of OO languages.
I agree, but that doesn't matter to a certain segment of the Rust community. They will go to great lengths, including redefining what OO is, to disassociate their language from OO for whatever reason.
Sometimes on say, /r/rust, they get questions from people wondering how to "do OOP" with Rust, by which they tend to mean "traditional", inheritance-style OOP. This leads to Rustaceans having to explain "no, that's not exactly how it works here". I'm not sure that's what GP is talking about, and I can't think of a time when I saw someone really aggressively dissociate from OOP, but it's easy for me to believe that some people might take it too far.