Does Deep Learning Have Deep Flaws? (kdnuggets.com)
110 points by vkhuc on June 19, 2014 | 54 comments



The concept is very cool, but it's not surprising that dimensionality reduction through a non-linear process is going to result in regions of input space that yield incorrect (and weird) results. Our visual system, while not the same as these systems, is extremely well developed and robust, yet the list of optical illusions that can fool us is quite long. In this study, the "optical illusions" are really just surprising because they aren't like anything that would fool humans.

This isn't to take away from the research; the most interesting result was just how close to valid inputs these erroneously classified images are.

But again, this isn't some fatal flaw. This summary completely neglects the fact that the paper also recommends that -- just like distorted images are added to training sets today (you wouldn't want something common like optical aberration from the camera lens screwing up your classifier) -- in the future, these adversarial examples should be added to training sets to mitigate their effects.

> In some sense, what we describe is a way to traverse the manifold represented by the network in an efficient way (by optimization) and finding adversarial examples in the input space. The adversarial examples represent low-probability (high-dimensional) “pockets” in the manifold, which are hard to efficiently find by simply randomly sampling the input around a given example. Already, a variety of recent state of the art computer vision models employ input deformations during training for increasing the robustness and convergence speed of the models [9, 13]. These deformations are, however, statistically inefficient, for a given example: they are highly correlated and are drawn from the same distribution throughout the entire training of the model. We propose a scheme to make this process adaptive in a way that exploits the model and its deficiencies in modeling the local space around the training data.[1]

[1] http://cs.nyu.edu/~zaremba/docs/understanding.pdf
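For concreteness, the kind of input deformation the quote refers to looks roughly like this (a minimal numpy/scipy sketch; the specific transforms and parameter ranges are my own illustration, not taken from the paper):

    import numpy as np
    from scipy import ndimage

    def deform(image, rng):
        """Apply a small random rotation, shift, and blur to one training image."""
        angle = rng.uniform(-10, 10)          # degrees
        shift = rng.uniform(-2, 2, size=2)    # pixels in (row, col)
        sigma = rng.uniform(0.3, 1.0)         # Gaussian blur strength
        out = ndimage.rotate(image, angle, reshape=False, mode="nearest")
        out = ndimage.shift(out, shift, mode="nearest")
        return ndimage.gaussian_filter(out, sigma=sigma)

    rng = np.random.default_rng(0)
    training_images = [rng.random((28, 28)) for _ in range(4)]  # stand-in for real data
    augmented = [deform(img, rng) for img in training_images]   # fold these back into the training set

The paper's point is that the adversarial search does the same job adaptively: instead of drawing deformations from a fixed distribution, it asks the model where it is weakest.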


> This summary completely neglects the fact that the paper also recommends that -- just like distorted images are added to training sets today (you wouldn't want something common like optical aberration from the camera lens screwing up your classifier) -- in the future, these adversarial examples should be added to training sets to mitigate their effects.

I've seen several articles citing this paper as proof that deep learning is deeply flawed, yet they all seem to miss the point you make above.

The other interesting result is that the neurons are not in fact individual features you can just grab and drop into another algo--the entire space defined by the model works together, through all the layers. Honestly, that was the more interesting result for me. I don't know that it negates anything; I've just got to stop telling people that the individual units are features.

Not that deep learning is the end-all-be-all of machine learning--it's not. It's just that this paper isn't saying what reporters are saying it's saying... As per usual?

Never let the truth get in the way of a good story...


An overlapping group of authors (compared with the ones who wrote the "intriguing properties" paper) recently published a neat preprint on using adversarial networks as part of the training process. The goals of that paper are different from those of the work discussed here, but I thought it was really interesting.

http://arxiv.org/abs/1406.2661


Let's not forget that the word "imperceptible" is a heavily laden term in this context. There are numerous modifications to the data that would be "imperceptible" to a machine learning system but would completely confuse a human. For example, if you were to divide the image into a grid and shuffle the squares, many ML systems would be tolerant of this kind of modification, because some training regimes do this anyway. To that system you haven't changed anything important about the image, and it would correctly classify it.
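To make the grid-shuffle example concrete, here's a rough numpy sketch (the patch size and everything else here is purely illustrative):

    import numpy as np

    def shuffle_grid(image, patch=8, seed=0):
        """Cut a square grayscale image into patch x patch tiles and permute them."""
        h, w = image.shape
        rows, cols = h // patch, w // patch
        tiles = [image[r*patch:(r+1)*patch, c*patch:(c+1)*patch]
                 for r in range(rows) for c in range(cols)]
        order = np.random.default_rng(seed).permutation(len(tiles))
        out = np.zeros_like(image)
        for i, j in enumerate(order):
            r, c = divmod(i, cols)
            out[r*patch:(r+1)*patch, c*patch:(c+1)*patch] = tiles[j]
        return out

    # unrecognizable to a human, often a non-event for patch-based models
    scrambled = shuffle_grid(np.random.rand(32, 32))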

What this result says to me is that there are really useful features of the data, which humans are totally unaware of, that can successfully classify images! And that's neat.


In the linked paper [0], they actually tested that point: after applying Gaussian noise to samples, the model could still recognize them half the time [1], despite their being nearly unreadable (to me).

[0] http://cs.nyu.edu/~zaremba/docs/understanding.pdf [1] http://puu.sh/9B4eG/1e9f7eb56b.png


I disagree. What this says to me is that a DNN is not how humans classify images.


Which would be defeating a strawman. I don't know anyone who claims DNN is precisely how any aspect of the human brain works.


The key claim, from the original paper:

> Second, we find that deep neural networks learn input-output mappings that are fairly discontinuous [...] Specifically, we find that we can cause the network to misclassify an image by applying a certain imperceptible perturbation [...] the same perturbation can cause a different network that was trained on a different subset of the dataset, to misclassify the same input.

It's an interesting outcome -- but there are many deep-learning approaches and many different benchmarks, so it will be important to see if this is a misleading anecdote or indicative of a systematic problem.

[1] http://cs.nyu.edu/~zaremba/docs/understanding.pdf


First thought:

Can I turn all digital pictures of me into 'adversarial examples', so the Eye of Sauron can't identify me from pictures?

I'm sure it's not as simple as that; presumably any algorithmic modification of an 'adversarial' nature can be countered by other algorithms.

But I predict a new realm of 'arms race' here in the future.


Adversarial examples are tied to a specific algorithm; they didn't produce any universal adversarial examples, from what I understand.


From the article: What’s more surprising is that the same perturbation can cause a different network, which was trained on a different training dataset, to misclassify the same image. It means that adversarial examples are somewhat universal.


Indeed, but they change the dataset and keep the same algorithm. I didn't see anything about changing the algorithm itself. That's not very surprising: if you have enough training data, any large training set will be typical, because it'll be average enough. But when you choose a specific algorithm, you can exploit its weaknesses, and throwing more data at it won't change anything. Changing the algorithm will, though.


But why would different training data lead to the same error? I could imagine it would lead to something with the same type of flaw, but why do the same exact adversarial images work, out of the near infinite number of possible images? Doesn't intuitively make sense to me, but I can't say I have much of a background in machine learning.

Like if you fit 5 close-to-linear 2d points with a 4th order polynomial, you'll overfit. Change the data slightly and you'll still overfit, but your fit will be very different.
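The analogy in code (numpy; just a toy sketch): the degree-4 fit interpolates the 5 points exactly, and a tiny change to one point produces wildly different coefficients.

    import numpy as np

    x = np.arange(5.0)
    y = 2.0 * x + np.array([0.00, 0.10, -0.10, 0.05, 0.00])  # close to linear
    print(np.polyfit(x, y, deg=4))   # degree-4 fit: interpolates the noise exactly

    y[2] += 0.1                      # nudge one data point
    print(np.polyfit(x, y, deg=4))   # very different coefficients: a very different overfit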


If I understood it correctly, they looked at the activation of the individual high-level neurons and edited the image such that the high-level activation values changed a lot while the image was still identical to the human eye.


IIRC even the human brain has the 'adversarial' image flaw (these images will be unique to each person), but one simple workaround is to alter the input image via eye movement (which happens unconsciously).


IIRC? Can you provide any source or example? This sounds very interesting.


There was a discussion on slashdot (take it with a grain of salt perhaps) about a similar article.

http://slashdot.org/story/14/05/27/1326219/the-flaw-lurking-...

The comment I recalled was written by someone with the handle "presidenteloco".


Not an example from deep learning, but [1] demonstrates that Bayesian systems have quite similar problems with sensitivity to initial conditions.

It is also rather striking that these DLNs seem to be tricked by what we would typically think of as noise.

[1] http://arxiv.org/abs/1308.6306


It is not so striking when you consider how the typical published "result" in deep learning is obtained -- spend a few months turning all the different knobs that these models have to offer (while possibly inventing a few new ones) until performance meets or beats the current state of the art on MNIST, CIFAR, and related benchmarks. Which is to say, these models heavily overfit to a few datasets; it should not come as a big surprise when they generalize poorly. What will happen now is that this perturbed dataset will be added to the standard training corpus and the DLNs will become robust to this effect. Then someone will figure out a new way to mess them up, and the process will repeat.


I don't know what papers you are reading, but you seem to have a very distorted view of the literature. How do results on ImageNet, production speech recognition datasets, language modeling, and high-resolution satellite images fit the pattern you allege? Once again, HN comment quality is depressingly low when it comes to machine learning topics. But I guess, to be fair, the description in the original link is very confused as well and misunderstands the conclusions of the paper.


Would you classify this as progress?


I thought the Owhadi et al. paper was about model misspecification; i.e., the true model is not in the hypothesis space. That's pretty fundamentally different--and far less of a problem--than gradient descent's "sensitivity to initial conditions".


This problem was observed 20+ years ago with linear models used for protein structure prediction. For any given model of what constitutes a properly folded protein, one could locate conformations of the same protein that were rated as folded even better than the correct conformation (I called them doppelgangers, but the name "decoy" is what caught on).

The statistical naivete of the field led to all sorts of inadvertent mixing of training and test set data, which generated a lot of spurious claims of solving the problem. That is, until someone attempted to find those decoys; they were always found. This led to the creation of the CASP competition to weed this out, and the field finally moved forward.

http://en.wikipedia.org/wiki/CASP

The key similarity to what I described above is that the adversarial search is done posterior to the training of the deep neural network. That makes all the difference in the world IMO. These adversaries may just be strange, otherwise-hard-to-reach bad neighborhoods in image space that you wouldn't stumble into without a roadmap. Or they may be an unavoidable consequence of the curse of dimensionality.

http://en.wikipedia.org/wiki/Curse_of_dimensionality

But given that neural networks have a gradient, it doesn't shock me that it can serve as a roadmap to locate a set of correlated but seemingly minor changes to an example in order to flip its classification. Doing so is simply back-propagation with the weights held constant, propagating the gradient all the way to the input data itself - literally a couple lines of code.
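Roughly, those couple of lines, in plain numpy for a softmax classifier with fixed weights (the paper actually uses a box-constrained L-BFGS search, so treat this as a sketch of the idea rather than their exact method):

    import numpy as np

    def softmax(z):
        e = np.exp(z - z.max())
        return e / e.sum()

    def adversarial_step(x, W, b, true_label, eps=0.01):
        """Nudge input x in the direction that increases the classification loss."""
        p = softmax(W @ x + b)        # forward pass; the weights stay constant
        dz = p.copy()
        dz[true_label] -= 1.0         # d(cross-entropy)/d(logits)
        dx = W.T @ dz                 # backpropagate through the layer to the input itself
        return x + eps * np.sign(dx)  # small, hopefully imperceptible, perturbation

    rng = np.random.default_rng(0)
    W, b = rng.normal(size=(10, 784)), np.zeros(10)  # toy 10-class model on flattened 28x28 inputs
    x_adv = adversarial_step(rng.random(784), W, b, true_label=3)

For a deep net you chain the same step back through every layer, which is exactly ordinary back-propagation with the weight updates switched off.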

IMO there are two interesting experiments to do next (not that anyone will take this seriously I expect, but ya know, hear me now, believe me later):

1. Characterize the statistical nature of the changes in the input images and then use those summary statistics as the basis of an image-altering algorithm, to see if that on its own can flip the classification of any image. If it can, be afraid: your driverless car may have blind spots. If not, then this is probably just a narrower form of overfitting.

2. If it's likely overfitting, attempt an expectation-maximization-like fix to the problem. Train the network, generate adversaries, add them to the training set, train again, and then lather, rinse, repeat until either the network can't be trained or the problem goes away (rough sketch below).
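Something like this, in pseudocode (train, generate_adversaries, and adversary_rate are hypothetical stand-ins, not any real library's API):

    def harden(model, train_set, max_rounds=10, tolerance=0.01):
        """EM-flavored loop: train, mine adversaries, fold them back in, repeat."""
        for _ in range(max_rounds):
            model = train(model, train_set)                       # hypothetical: ordinary training pass
            adversaries = generate_adversaries(model, train_set)  # hypothetical: e.g. gradient search per example
            if adversary_rate(model, adversaries) < tolerance:    # hypothetical: has the problem (mostly) gone away?
                break
            train_set = train_set + adversaries                   # lather, rinse, repeat
        return model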

Expensive? Yes. But you're Google/Facebook/Microsoft and you have lots of GPUs. No excuses...

Failing that, the above is on my todo list so I'm throwing it out there to see if anyone can poke holes in the approach.


Thanks for saying this. I can't really comment on your experiments (I'm not qualified), but you can be assured that some people working in machine learning today have specifically learnt the lessons of pre- and post-CASP. I don't know that I agree CASP was founded specifically because people found decoys, but...

it was a special shock when I learned about ensemble methods (I think they were just called "combined servers" at the time) at CASP and saw that all our hard work (manual alignments, lots of expert analysis of models, etc.) wasn't really better (far worse, in fact) than a few simply trained ensemble systems that memorized what they were bad at and classified their predictions with the appropriate probabilities.

See also:

http://www.nature.com/nchem/journal/v6/n1/nchem.1821/metrics...

http://googleresearch.blogspot.com/2012/12/millions-of-core-... (note: 4 of the 6 projects awarded specifically involved physical modelling of proteins, and the fifth was a drug-protein binding job)

http://research.google.com/archive/large_deep_networks_nips2...

None of the above is coincidental: the first two links are there specifically because I went to Google to use those GPUs and CPUs for protein folding, design, and drug discovery. The third project is now something I am experimenting with.


> I don't know that I agree CASP was founded specifically because people found decoys, but...

Here's an example of what drove my work back then:

Look at the energies and RMSDs (a measure of distance from the native structure) of melittin in these two papers:

Table 2 in http://onlinelibrary.wiley.com/doi/10.1002/pro.5560020508/pd...

and

Table 1 in http://www.ncbi.nlm.nih.gov/pmc/articles/PMC1260499/pdf/biop...

In the first paper, the energy is higher, but the RMSD is lower. In the second paper, the RMSD is higher, but the energy is lower. How did this happen?

Well, in the first paper, phi/psi angles are set directly from a library of sequentially homologous dipeptides to pentapeptides that INCLUDES MELITTIN. So, by the time you get to tripeptides, you're nearly guaranteed to just be outputting the native conformation phi/psi angles over and over again. And this paper is just one of many to make basic mistakes like this.

As a young Turk back then, I got into a rather long and vigorous online argument with one of the founders of CASP, who insisted the first paper was a partial solution to the protein folding problem. And I suspect that argument influenced the subsequent creation of CASP.

Anyway, it's been nice rehashing my post-doc glory days(tm), but we no longer have any excuses here. We have the tools, we have the technology...


> If it's likely overfitting, attempt an expectation maximization-like fix to the problem. Train the network. Generate adversaries, Add them to the training set, train again and then lather rinse repeat until either the network can't be trained or the problem goes away.

As I quoted in my other comment, the paper suggests doing exactly that.


I'm knowingly being a pedant, and I apologize for that, but they don't quite say that; rather, they come awfully close to doing so. And I'm being a pedant because of all the low-information sorts claiming this is a refutation of deep neural networks (it's not, well, at least not yet).

"The above observations suggest that adversarial examples are somewhat universal and not just the results of overfitting to a particular model or to the specific selection of the training set. They also suggest that back-feeding adversarial examples to training might improve generalization of the resulting models."

20 years ago I did this for linear models of protein energetics (also known as knowledge-based potentials or force fields), adding the decoys and then refitting the parameters ad nauseam. What I eventually arrived at was the invalidation of every single energy model and force field in use for protein energetics (yes, I really reverse-engineered just about everyone's, from Michael Levitt's to George Rose's to AMBER, CHARMM, and ECEPP). This was an unpublishable result according to my post-doc adviser at the time, so it never got written up.

In retrospect, he was utterly wrong. So I am really curious what would happen here if this were attempted with these much more complex models.


So, the other interesting paper, which I failed to cite, was this one: http://www.ncbi.nlm.nih.gov/pubmed/24265211 in which we showed that Rosetta needed to include bond-angle terms to accurately model some proteins.

That said, I'm a bit surprised you found what you did about AMBER (and other force fields), or rather, that you didn't publish. The Cornell et al. force field was later acknowledged to have serious problems with protein folding, but a number of improvements have been made since then.

Anyway, I would have happily published that result with you (I worked with Kollman, have worked with Baker and Pande, and desperately want to see the force fields improve using machine learning). There was a guy at BMS who was working on this with ML back in the day ('99-2000), and the AMBER folks trashed him because they believed the force field's transferability from small molecules to proteins was valid (in many ways it was, but it got some key details wrong).

If you think there is a straightforward machine-learning-for-force-fields problem that can dramatically improve ab initio folding with DistBelief and Exacycle, let me know. It shouldn't be hard to figure out my email address if you look at the papers I cited and do some basic set operations :-)


I talked to one of the authors of this paper at ICLR and he said that it wasn't really worth the time to compute the adversaries and train, though it did improve results. He said that in the time it took to generate adversaries and then train on them, the net was better off just training on more image data, since there is a near infinite supply of it. Perhaps if you didn't have an infinite dataset, this wouldn't apply.

Also, the really interesting thing was that adversaries generated for one network topology/dataset were still adversarial even for other network topologies/datasets, which might imply that the nature of the adversaries is universal rather than highly specific to the exact network trained.


Can you point to any literature on 2?


I think that, just like with any machine learning algorithm, and especially in computer vision, you need to prepare things properly. More robust data and images -- such as a moving window over image patches, image rotations, even going as far as applying filters like Sobel for edge detection -- will make your algorithm a lot better.

Any algorithm has weaknesses. It's a matter of being aware of them and compensating for them in your model, possibly by using another model.


The problem raised in the paper is that most of the time, you can safely assume that if you have a new data point that is very close to a bunch of correctly-classified data points and far away from points of different classes, the new data point will be correctly classified. In other words, you assume the classification probability is locally smooth.

The problem is that the adversarial examples they are able to come up with are very close to the original images, so the smoothness assumption seems to be invalid for deep learning models. As they put it in the paper: "Our main result is that for deep neural networks, the smoothness assumption that underlies many kernel methods does not hold."

It's going to be interesting to see what happens when other researchers try to replicate the results for other models and datasets.


What if they introduced smoothness artificially? You can easily do that by averaging the decision value with a certain window function.

To accomplish that, for example, they could take perturbations of the input and then, e.g., take a majority vote.
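A sketch of that idea (assuming some predict(x) function that returns a class label; sigma and the sample count are arbitrary): classify many Gaussian-perturbed copies of the input and take the majority vote.

    import numpy as np
    from collections import Counter

    def smoothed_predict(predict, x, sigma=0.1, n_samples=100, seed=0):
        """Majority vote over predictions on noisy copies of x."""
        rng = np.random.default_rng(seed)
        votes = [predict(x + rng.normal(scale=sigma, size=x.shape))
                 for _ in range(n_samples)]
        return Counter(votes).most_common(1)[0][0]

    # toy usage with a stand-in classifier
    print(smoothed_predict(lambda v: int(v.sum() > 0), np.zeros(784)))

Since the adversarial pockets are low-probability, averaging over a noise window might wash them out, at the cost of n_samples forward passes per prediction.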


The value proposition of deep learning is to eliminate these kinds of hand-coded features and to discover the features automagically.

However, maybe there's a middle ground. That is, maybe we don't need the more esoteric features that SIFT uses, but it just makes sense to do edge detection, and, say, a Fourier transform for audio.
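What that middle ground could look like, as a sketch (scipy/numpy; my illustration, not anything from the article): hand the network the raw pixels plus cheap Sobel edge maps as extra channels, and let it discover the rest.

    import numpy as np
    from scipy import ndimage

    def with_edge_channels(image):
        """2-D grayscale image -> 3-channel array: raw pixels plus x/y Sobel edge maps."""
        gx = ndimage.sobel(image, axis=0)
        gy = ndimage.sobel(image, axis=1)
        return np.stack([image, gx, gy], axis=-1)

    x_aug = with_edge_channels(np.random.rand(28, 28))  # shape (28, 28, 3), fed to the model as usual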


The true value proposition of deep learning is not to avoid hand-coded features, but to make better use of scale in data and computational resources.

More specifically, adding SIFT or edge detection to your raw pixel input will almost always strictly improve a deep model's performance (though they might be redundant) at a not-particularly-large computational cost.

It wouldn't solve the adversarial example problem though, except to the extent that it makes calculating gradients harder.


I wrote [1]; I'm plenty aware of the "feature discovery" that goes on, but it can still be an enhancer. See my recent talk [2] for a good overall idea of the situation.

Normalization and other data transforms are still required for discovery of features.

[1]: http://deeplearning4j.org/

[2]: https://www.youtube.com/watch?v=hykoKDl1AtE


That library looks like garbage. The website describes convnets as a "type of restricted Boltzmann machine." How can anyone trust a library with that level of misinformation?


I appreciate the feedback. I need to clarify the implementation in DL4j in the documentation.

The impl is a convolutional RBM. That being said, where's your deep learning library? ;)

The documentation is still being worked on. If that's the only thing you can cherry pick from a new project, I must not be doing too bad.

Edit: Clarified. I know you're just a troll account, but I'll throw this out there anyway: I think anyone who judges the quality of a lib based on a 2-second reading isn't qualified to judge much. A lot of it does rely on me with the documentation, but I'd love to chat with you one on one if you think I'm not qualified. Trash me all day; I'll either learn something or embarrass you. Both are fine with me ;).


A lot of people like Torch. I haven't looked at pylearn2 in a while, but that might be good too. Then there are a few researcher libraries with limited documentation. Nitish Srivastava has his DeepNet library and George Dahl has gdbn and there are certainly a few others too. Who knows, maybe people will start contributing documentation.

I would probably recommend Torch at this point. The incentives don't exist for the experts to make really good open source projects and spend all the time required maintaining them and helping people with them.


That's fine. I do this full time. Despite it being new, I'm coming at it from the standpoint of providing a platform for newer users and apps around it. You would be surprised at the demand from industry.

You're right about this, which is why I started a company around it.

I've already talked with Andrew Ng and Yoshua Bengio. My incentives are different from theirs; however, I do have their blessing to continue doing this.

I walked into this expecting skeptics. That being said, I love deep learning as a field and will be implementing every possible neural net I can. Since my incentives are different, I can explore the different use cases with customers and help further the field in directions that might not make sense for, say, Baidu, Facebook, or Google.

[1] http://www.skymind.io/


> The value proposition of deep learning is to eliminate these kinds of hand-coded features and to discover the features automagically.

Yes, and just to clarify - this seems like an indication that, however useful, deep learning can't follow through with that promise. And I think the ability to do this is the key thing - all the approaches work at some level, but without this "automagicity", each becomes hostage to brow-wrinkling experts who become the only ones to understand the black magic of algorithm tuning.


But then we wouldn't see faces in clouds...

Neural networks are not perfect solutions; they are solutions that get an organism to reproduce successfully.

Read any book on color vision: humans have similar problems, yet for the most part we see things and realize that clouds are just clouds and not faces -- except for the religious, who lose their shit when faces appear in clouds.


This is old news. And not really that shocking. You always use multiple models and check them against each other. None is perfect. Big deal.


This actually kind of freaks me out. Might it be possible that there is a way to corrupt brains?


Sure, the human brain is not infallible either. It's actually quite similar to how optical illusions are created: by using what we know about the visual system to create 'adversarial' inputs that produce strange results.


We are well aware of various optical illusions, but are there "illusions" that can be applied against other brain structures, such as memory or beliefs?


For memory, there's the "Lost in the mall" technique for implanting false memories. It exploits an effect known as memory conformity where people's memories of an event tend to converge after discussing it together.

http://en.wikipedia.org/wiki/Lost_in_the_mall_technique

http://en.wikipedia.org/wiki/Memory_conformity

Not sure what you mean by beliefs.

There was also an experiment on split-brain patients (the connection between the left and right hemispheres is severed) where they'd show a command like "WALK" to the patient's right hemisphere only. They'd get up and walk. But since language is often localized to the left hemisphere, if you talk to them you are talking to the left hemisphere only, which did not see the command. Instead of saying something like "I don't know", they would make up a plausible reason to get up like "I'm getting a drink".

Does that count? It exploits what we know about the visual system (half of the visual field goes to each hemisphere) and the localization of a particular function (language) in a patient with a specific disability (their corpus callosum is severed, which is mostly asymptomatic) to produce a completely strange result (invention of a motive).


There are various haptic illusions (http://en.wikipedia.org/wiki/Haptic_Illusion)

In audio, we have the Shepard tone (http://en.wikipedia.org/wiki/Shepard_tone) as an auditory illusion.


There are experiments that can create a false memory.

Not sure if you consider that "corruption".

http://psych.hanover.edu/JavaTest/CLE/Cognition/Cognition/fa...


Can you fix it by adding random Gaussian noise to the input?


I don't know if this applies to other classifying algorithms, but I guess this will mean better CAPTCHAs?


I wonder if subtle make up could be used to make a real life face into an adversarial example.


You can apply facial makeup that makes your face look completely different to a human observer. Trying to understand what a person with full-coverage facial makeup really looks like is a very frustrating experience. You're trying to see the contours of the face, but your brain keeps focusing on the applied colors. The same makeup would probably work well on a computer as well.


Yes, actually http://cvdazzle.com/



