If you're the kind of person that's interested in taking up this challenge, but you currently have the coding skills without the deep learning skills, we built something that can equip you with most of the current best practices in deep learning in <2 months: http://course.fast.ai/ . It doesn't assume anything beyond high school math, but it doesn't dumb anything down (key mathematical tools are introduced when required, using a "code first" approach).
We don't charge anything for the course and there are no ads - it's a key part of our mission so we give it to everyone for no charge: http://www.fast.ai/about/
And yes, it does work. We have graduates who are now in the last round of applications for the Google Brain Residency, who are moving into deep learning PhDs, who have landed jobs as deep learning practitioners in the Bay Area, etc: http://course.fast.ai/testimonials.html . Any time you get stuck, there's an extremely active community forum with lots of folks who will do their best to help you out: http://forums.fast.ai/ .
(Sorry for the blatantly self-promotional post, but if you're reading this thread you're probably exactly the kind of person we're trying to help.)
thrawy45678: I see a lot of criticism about tmux and other non-core items being included in the overall curriculum. I think the author is just showing the workflow he currently uses and laying out the full toolkit behind it. I don't think he's saying this is "THE" approach one has to follow.
derekmcloughlin: If you've no experience with ML stuff, you might want to start with Andrew Ng's course [...] Paperspace (https://www.paperspace.com/ml) have ML instances starting at $0.27 per hour.
webmaven: Any recommendations for the cheapest (but not "penny wise, pound foolish") HW setup that meets these requirements for course completion? (answers: https://news.ycombinator.com/item?id=13227437 )
The course is excellent, and thank you for making it and offering it for free - but a word of caution for those considering following it: along the way you will incur not-insignificant costs for Amazon EC2 GPU instances and, even if your instance is shut down, for SSD EBS storage.
Edit: To be clear, I'm not suggesting it's not worth it, just highlighting that there's more than a time commitment to budget for.
Yep, the course is free, but you'll need to pay for the computing power one way or another.
If you have a workstation with a fairly recent Nvidia GPU (I used a GTX 980 Ti) and a bunch of spare disk space, you don't need AWS at all. You'll still pay for the electricity, of course, but it's not what I would call a significant extra cost. That is, if you already have the hardware.
Get a GTX 1070 at a minimum, or a GTX 1080 Ti if you have a bit more cash to spend. Disk space is entirely dependent on your data set size; we're not talking huge numbers for most things you would be doing.
I have a GTX 970. How much spare space do we need?
Btw, one question: the course has a specific video for setting up AWS, but what do we do for local machines? Should I just install the required packages (Python, Keras, or whatever they use)?
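I'm guessing a quick sanity check after a local install would look something like this (just my assumption that it's the Keras/Theano stack the AWS script sets up; correct me if that's wrong):

    # quick local sanity check; assumes something like `pip install keras theano`
    # (exact package list is my guess, not taken from the course)
    import keras
    import theano
    print("keras:", keras.__version__)
    print("theano:", theano.__version__)
    # if Theano is pointed at the GPU (device=gpu in ~/.theanorc) it announces
    # the GPU on import; otherwise it silently falls back to the CPU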
To give a more exact estimate: if you don't use spot instances ($0.20/hour), it'll cost $0.90/hour, and if you do the suggested 70 hours of work for the course, that's $63. And then there's around $6/month for the EBS volume.
The time you spend on the course, and the time spent by your instance running workloads are two completely separate things.
Your cost per-hour is accurate (plus a few bucks for an IP address), but the number of hours is off by an order of magnitude.
Also, I'm paying double that per instance per month for SSD, using the provided scripts to build the instances. It's the smallest part of the cost, but I mention it because it can take newcomers to EC2 by surprise if they have an instance shut down but still consuming disk space.
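If it helps, here's the back-of-the-envelope math with the usage left as knobs you fill in yourself (the rates are the ones quoted above; the hours and months are placeholders):

    # rough EC2 cost sketch; rates as quoted above, usage numbers are placeholders
    on_demand = 0.90   # $/hr for the on-demand GPU instance
    spot      = 0.20   # $/hr typical spot price
    ebs_month = 6.00   # $/month for the EBS volume (double that in my case)

    instance_hours = 70   # replace with hours the *instance* actually runs
    months         = 2    # replace with how long you keep the volume around

    print("on-demand: $%.2f" % (instance_hours * on_demand + months * ebs_month))
    print("spot:      $%.2f" % (instance_hours * spot + months * ebs_month))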
I'm going through this course, but using Google cloud instead of AWS. I can confirm that at least the first Jupyter notebook works well for me.
I had to adapt the aws-install.sh script, but it was easy enough. I ended up using a snapshot instead of a persistent volume, as the monthly cost when you're not running anything is much cheaper. So I have a script that creates a new instance and then restores that snapshot. It's faster than installing the dependencies each time.
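Roughly, the create-and-restore flow boils down to something like this (placeholder names and zone, not my actual setup; it just shells out to the gcloud CLI, and you'd add your own machine-type/GPU flags to the instance line):

    # rough sketch: recreate the boot disk from the snapshot, then boot from it
    import subprocess

    ZONE = "us-east1-c"            # placeholder
    SNAPSHOT = "fastai-snapshot"   # snapshot taken after the one-time setup

    def gcloud(*args):
        subprocess.check_call(["gcloud", "compute"] + list(args) + ["--zone", ZONE])

    gcloud("disks", "create", "fastai-disk", "--source-snapshot", SNAPSHOT)
    gcloud("instances", "create", "fastai-box", "--disk", "name=fastai-disk,boot=yes")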
Yes, but GPUs should only be considered for acceleration when there is insufficient local CPU power to accomplish something valuable. Otherwise, it's like buying a Ferrari to get groceries.
Don't buy equipment before you have demonstrated a real need for it.
This applies across basically all of life, and it's so frustrating to see people ignoring it, because what ends up happening is they use a string of 'gonnas' to justify buying stuff they don't need. Gonna get fit - buy $1500 worth of gym gear. Gonna learn electronics - buy oscilloscope, power supplies, tons of components. Gonna get your motorcycle license - buy brand new bike and stick it in the garage.
If you have a desktop computer, you're good to start. When you've done enough that your available CPU/GPU is limiting you on your own projects (not on something you pulled off github) then you can look at upgrading.
A fairly accomplished electronic engineer told me that they'd never once solved a problem using an oscilloscope, but that it helped to keep them occupied while they were mulling over what might have gone wrong. (That's presumably why the better ones have so many knobs and dials to play with, like one of those children's toys.)
I've certainly solved problems with a storage scope before, but not for a long time, and they were mostly software problems rather than hardware problems (ie. using it as a poor man's logic analyzer to infer what's going on with the code via a couple of spare IO pins). I really kinda want one though.
While I have found it quite easy to take a code-first approach to deep learning/machine learning, I have encountered a lot of scepticism about such an approach from existing ML/data science practitioners, and I feel like investors will be even more risk-averse.
Funnily enough, big tech companies have seemed the most willing to accept people making the switch. I'm guessing it's because their appetite for ML/DL people is currently insatiable.
Hard to say. Universities aren't very good at instilling solid applied engineering skills. If you're 18 with a very high GPA, you've passed this course, and you have strong programming skills, I think some investors would be interested. Similarly, if you have, say, a physics degree from a good university and you've done this, I think investors would be interested.
If you're 22+ with a high school degree and not much of an engineering resume... Yeah, your pitch deck better be good.
Besides graduating from your fast.ai course, what were the other qualifications of those Google Brain applicants? I imagine they have, or are in the process of getting, an MA/PhD in a non-AI-related area.
But I really do not understand why we hate ads. I have seen many tutorials that are given out for free. Of course, I am grateful that you decided to offer the course free of charge, but I really would not mind a few adverts just to get you, as the creator/maintainer, some $$.
I am the self-published author of a decent intro to web development book for the Go language. It is an example-driven tutorial/ebook, and while the main book is open source on GitHub, there is a leanpub.com version offered at variable pricing ($0 to $20). That has been working great for me: rather than getting nothing for the tutorial, I am getting something.
There are, of course, ways of getting something from the current tutorial without ruining it.
My $.02 -- if I were to provide a resource with the goal of being of great value to people, and it's within my wherewithal to maintain it with what I make elsewhere, the $$ return from ads is far too low to justify how much they detract from the experience I set out to provide. Ads aren't the absolute worst, but I think we can agree they are, on average, a negative for the experience you visit a non-shopping page for.
For a fairly niche resource such as this (it'll never reach a "how do I get a boyfriend" level of audience), it's unlikely to ever draw enough ad revenue to pay for itself. To do so, you'd need to deploy a high-quality, specialized ad system, which honestly becomes a high-touch deployment and maintenance project that isn't directly tied to the core goal of just providing an awesome resource publicly for the greater good, and which is just distracting (or costly) for whoever is behind it.
I appreciate the mature choice not to chase small change and instead eat the cost of hosting and development, so that what you provide is not just great, but unadulterated as well.
True. What made me write this was a YouTube channel I came across: they don't accept donations or show ads, they have crazy view counts, and they say they don't want to earn money from it. Granted, they give it away for free, but it isn't "evil" to get some money out of it. I am not saying rip off students by charging $10,000 per session, but providing something like a PDF version of the online guide for a small amount, say $5, would give you some return in the long term.
That's passive income you don't have to fuss over, unlike ad deployment. We as an industry are funny: we expect everything to be free of cost _and_ that the author not monetize it in any way. All I am saying is that it isn't evil to monetize.
Ads are basically a tool for psychological manipulation. It's unfortunate that this is the only practical method of monetization for some creators. Micropayments in future may help with this. To me, ads do feel disrespectful of my audience.
Ads might well be that, but all I wanted to say is that it could be monetized somehow, like a PDF on leanpub at pay-what-you-want pricing from $0 to $20. I would have gladly paid.
The Coursera course is "machine learning", which is a more general term, while this course (based on the description) is "deep learning", which is more specific: the ML course ends with neural networks, while this one starts with them.
I'd start with the Coursera one, if only to learn when _not_ to use a neural network and use something simpler. But if you already know how to cluster data with K-means, what a linear classifier is, and/or what an SVM is, then you can probably skip the ML one.
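To make "something simpler" concrete, the kind of baseline I mean is a few lines of scikit-learn (a toy example on the built-in digits dataset; dataset and numbers are just for illustration):

    # linear-classifier baseline on a small image dataset; if this is already
    # good enough for your problem, a deep net may be overkill
    from sklearn.datasets import load_digits
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    X, y = load_digits(return_X_y=True)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    print("linear baseline accuracy:", clf.score(X_te, y_te))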
Could you add a simple description of the practical use cases for the lessons?
I know what "image recognition" is useful for.
I have no idea whether I need or want to learn "CNN", "overfitting", "embedding", "NLP", or "RNN".
I am interested mostly in image recognition and text classification.
Thanks! Thanks, thanks, thanks a ton for what you are doing. As someone without a strong math background, it is kinda hard to enter the field, so I'm going to take 7 weeks now and hope I can understand deep learning better than I do now.
I love your project but I totally detest your landing page with the huge animated panels that sit there eating up cycles that could be used for something better. I'm sure it's great for conversion and all that but that doesn't stop me from being irritated.
If the course is free and you're doing this to improve the world what's the point of using such tactics? At least shut down the rotation after one or two iterations.