Deep-pwning is modularized into a number of components to minimize code repetition. Because of the vastly different nature of possible classification tasks, the current iteration of the code is optimized for classifying images and phrases (using word vectors).
These are the code modules that make up the current iteration of Deep-pwning:
The drivers are the main execution point of the code. This is where you can tie the different modules and components together, and where you can inject further customizations into the adversarial generation processes.
This is where the actual machine learning model implementations are located. For example, the provided lenet5 model definition is located in the model() function within lenet5.py. It defines the network as the following:
-> Convolutional Layer 1
-> Max Pooling Layer 1
-> Convolutional Layer 2
-> Max Pooling Layer 2
-> Dropout Layer
-> Softmax Layer
LeCun et al. LeNet-5 Convolutional Neural Network
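The filter counts below (32 and 64 feature maps), the 5x5 'SAME'-padded convolutions, and the 2x2 pooling windows are the conventional choices for an MNIST-sized LeNet variant; they are assumed here for illustration rather than read from lenet5.py. This minimal sketch traces how the tensor shape flows through the layer stack listed above:

```python
def same_conv(h, w, _c_in, c_out):
    # 'SAME' padding with stride 1: spatial size is preserved, depth becomes c_out
    return h, w, c_out

def max_pool_2x2(h, w, c):
    # 2x2 pooling with stride 2 halves each spatial dimension
    return h // 2, w // 2, c

shape = (28, 28, 1)                  # MNIST-sized grayscale input
shape = same_conv(*shape, c_out=32)  # Convolutional Layer 1 -> (28, 28, 32)
shape = max_pool_2x2(*shape)         # Max Pooling Layer 1   -> (14, 14, 32)
shape = same_conv(*shape, c_out=64)  # Convolutional Layer 2 -> (14, 14, 64)
shape = max_pool_2x2(*shape)         # Max Pooling Layer 2   -> (7, 7, 64)
# The dropout layer does not change the shape; the softmax layer then maps
# the flattened 7 * 7 * 64 features down to the 10 class probabilities.
print(shape)                         # -> (7, 7, 64)
```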
- Adversarial (advgen)
This module contains the code that generates adversarial output for the models. The run() function defined in each of these advgen classes takes in an input_dict, which contains several predefined tensor operations for the machine learning model defined in Tensorflow. If the model that you are generating the adversarial sample for is known, the variables in the input dict should be based off that model definition. Else, if the model is unknown (black box generation), a substitute model should be used/implemented, and that model definition should be used. Variables that need to be passed in are the input tensor placeholder variables and labels (often referred to as x -> input and y_ -> labels), the model output (often referred to as y_conv), and the actual test data and labels that the adversarial images will be based off of.
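Once the gradient of the loss with respect to the input x has been evaluated through the tensor operations passed in via input_dict, a perturbation step such as the fast gradient sign method of Goodfellow et al. (referenced below) reduces to a one-liner. A minimal numpy sketch of just that step — the function name, the eps value, and the [0, 1] pixel range are illustrative assumptions, not the framework's actual API:

```python
import numpy as np

def fgsm_perturb(x, grad_x, eps=0.1):
    """Nudge inputs x along the sign of the loss gradient w.r.t. x.

    grad_x is assumed to be the already-evaluated gradient of the model's
    loss with respect to the input placeholder (x in the input_dict).
    """
    x_adv = x + eps * np.sign(grad_x)
    # Keep pixel values inside a valid [0, 1] image range
    return np.clip(x_adv, 0.0, 1.0)

x = np.array([0.2, 0.5, 0.95])
grad = np.array([0.7, -1.3, 0.4])
print(fgsm_perturb(x, grad))
```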
Miscellaneous utilities that do not belong anywhere else. These include helper functions to read data, deal with Tensorflow queue inputs, etc.
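As a flavor of the small data-massaging helpers that live in a module like this, here is a hypothetical example (dense_to_one_hot is illustrative, not necessarily a function from the repo) that converts integer labels into the one-hot rows the models consume:

```python
import numpy as np

def dense_to_one_hot(labels, num_classes):
    """Convert a vector of integer class labels to one-hot float rows."""
    labels = np.asarray(labels)
    one_hot = np.zeros((labels.size, num_classes), dtype=np.float32)
    one_hot[np.arange(labels.size), labels] = 1.0
    return one_hot

print(dense_to_one_hot([0, 2, 1], 3))
```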
These are the resource directories relevant to the application:
Tensorflow allows you to load a partially trained model to resume training, or load a fully trained model into the application for evaluation or performing other operations. All these saved ‘checkpoints’ are stored in this resource directory.
This directory stores all the input data, in whatever format the driver application takes in.
This is the output directory for all application output, including adversarial images that are generated.
Please follow the directions to install tensorflow found here https://www.tensorflow.org/versions/r0.8/get_started/os_setup.html which will allow you to select the tensorflow binary to install.
$ pip install -r requirements.txt
Execution Example (with the MNIST driver)
To restore from a previously trained checkpoint. (configuration in config/mnist.conf)
$ cd dpwn
$ python mnist_driver.py --restore_checkpoint
To train from scratch. (note that any previous checkpoint(s) located in the folder specified in the configuration will be overwritten)
$ cd dpwn
$ python mnist_driver.py
- Implement saliency map method of generating adversarial samples
- Add defense module to the project for examples of some defenses proposed in literature
- Upgrade to Tensorflow 0.9.0
- Add support for using pretrained word2vec model in sentiment driver
- Add SVM & Logistic Regression support in models (+ example that uses them)
- Add non-image and non-phrase classifier example
- Add multi-GPU training support for faster training speeds
Note that dpwn requires Tensorflow 0.8.0. Tensorflow 0.9.0 introduces some
(borrowed from the awesome Requests repository by kennethreitz)
- Check for open issues or open a fresh issue to start a discussion around a feature idea or a bug.
- Fork the repository on GitHub to start making your changes to the master branch (or branch off of it).
- Write a test which shows that the bug was fixed or that the feature works as expected.
- Send a pull request and bug the maintainer until it gets merged and published. 🙂 Make sure to add yourself to
There is so much impressive work from so many machine learning and security researchers that directly or indirectly contributed to this project, and inspired this framework. This is a non-exhaustive list of resources that were used or referenced in one way or another:
- Szegedy et al. Intriguing properties of neural networks
- Papernot et al. The Limitations of Deep Learning in Adversarial Settings
- Papernot et al. Practical Black-Box Attacks against Deep Learning Systems using Adversarial Examples
- Goodfellow et al. Explaining and Harnessing Adversarial Examples
- Papernot et al. Transferability in Machine Learning: from Phenomena to Black-Box Attacks using Adversarial Samples
- Grosse et al. Adversarial Perturbations Against Deep Neural Networks for Malware Classification
- Nguyen et al. Deep Neural Networks are Easily Fooled: High Confidence Predictions for Unrecognizable Images
- Xu et al. Automatically Evading Classifiers: A Case Study on PDF Malware Classifiers
- Kantchelian et al. Evasion and Hardening of Tree Ensemble Classifiers
- Biggio et al. Support Vector Machines Under Adversarial Label Noise
- Biggio et al. Poisoning Attacks against Support Vector Machines
- Papernot et al. Distillation as a Defense to Adversarial Perturbations against Deep Neural Networks
- Ororbia II et al. Unifying Adversarial Training Algorithms with Flexible Deep Data Gradient Regularization
- Jin et al. Robust Convolutional Neural Networks under Adversarial Noise
- Pang et al. Seeing stars: Exploiting class relationships for sentiment categorization with respect to rating scales
- Goodfellow et al. Deep Learning Adversarial Examples – Clarifying Misconceptions
- WildML Implementing a CNN for Text Classification in Tensorflow
- Krizhevsky et al. The CIFAR-10 dataset
- LeCun et al. THE MNIST DATABASE of handwritten digits
- Pang et al. Movie Review Data (v2.0 from Rotten Tomatoes)