[ art / civ / cult / cyb / diy / drg / feels / layer / lit / λ / q / r / sci / sec / tech / w / zzz ] archive provided by lainchan.jp

lainchan archive - /λ/ - 17131

File: 1466220584294.png (2.77 MB, 300x300, cover.jpg)


We definitely ought to have this thread, since AI is about as future as it gets. If you're not coding artificial intelligence, your replacement will be.

Hacker's Guide to Neural Networks: http://karpathy.github.io/neuralnets/


(if anyone else has materials, please do contribute. i'm limited by my phone at the moment.)


File: 1466221333748.png (37.24 KB, 200x136, Gneural_nertwork.png)

I can't let GNU be ignored in this: https://www.gnu.org/software/gneuralnetwork/

I'd wager it will eventually be controllable from GNU Guile. That would be neat.


I started getting into learning about machine learning after seeing the MarI/O video on youtube. Worth watching if you haven't seen it.



Taking an AI subject at uni using this textbook. I've looked at the first 5 or 6 chapters.

http://www.cin.ufpe.br/~tfl2/artificial-intelligence-modern-approach.9780131038059.25368.pdf [46mb]

We did a pathfinding assignment with UCS, BFS, and A* search, another practical with decision tree learning (lots of data), and a likelihood-weighted sampling practical (small amount of data, lots of samples).
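For anyone curious, the A* part of that kind of assignment fits in a few lines. A rough sketch on a toy grid (4-connected moves, Manhattan-distance heuristic; all details here are my own, not from the actual assignment):

```python
import heapq

def a_star(grid, start, goal):
    """A* on a 4-connected grid; cells with 1 are walls.
    Manhattan distance is admissible here, so the result is optimal."""
    def h(p):
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])

    rows, cols = len(grid), len(grid[0])
    frontier = [(h(start), 0, start)]       # (f = g + h, g, node)
    best_g = {start: 0}
    while frontier:
        f, g, node = heapq.heappop(frontier)
        if node == goal:
            return g                        # cost of the cheapest path
        if g > best_g.get(node, float("inf")):
            continue                        # stale queue entry, skip
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                ng = g + 1
                if ng < best_g.get((nr, nc), float("inf")):
                    best_g[(nr, nc)] = ng
                    heapq.heappush(frontier, (ng + h((nr, nc)), ng, (nr, nc)))
    return None                             # goal unreachable

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
print(a_star(grid, (0, 0), (2, 0)))  # 6 -- has to detour around the wall row
```

Swap the `h` function for a zero heuristic and you get UCS; drop the priority queue for a FIFO and you get BFS.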


I wonder how serious lainons are about ML.


Lots of data sets to play with. Some of them are effectively uncrackable with current techniques.


Sklearn gets you around manually implementing most algorithms. Not that making them yourself is a bad thing (it's actually very interesting and educational), but you're 1) going to spend a lot of time doing it and 2) not going to make a better wheel, so to speak.
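To illustrate the point: sklearn's `KNeighborsClassifier` is three lines of fit/predict, but even the hand-rolled wheel isn't huge for the simple algorithms. A minimal k-NN from scratch (stdlib only; the toy data is made up):

```python
from collections import Counter
import math

def knn_predict(train, query, k=3):
    """Classify `query` by majority vote among the k nearest
    training points (Euclidean distance). train = [(features, label), ...]"""
    neighbours = sorted(train, key=lambda fl: math.dist(fl[0], query))[:k]
    votes = Counter(label for _, label in neighbours)
    return votes.most_common(1)[0][0]

train = [((0.0, 0.0), "a"), ((0.1, 0.2), "a"),
         ((5.0, 5.0), "b"), ((5.2, 4.9), "b")]
print(knn_predict(train, (0.2, 0.1)))  # "a"
```

The sklearn version buys you k-d trees, distance weighting, and vectorization for free, which is exactly the "better wheel" argument.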

Does anyone have any ongoing ML projects?


>that wordfilter


Over my holidays I'm planning on looking into the stock market. Lots of numerical data is available. I think looking at the data alone could be useful, but not enough to predict the (distant) future. Scanning media/Twitter for keywords could also work, particularly if the influence of news/stock-market accounts is weighted a bit more heavily than that of general users.

Both of these are just guesses until they're tested and optimized. I'm not sure whether it's possible to get long-term trends from this data or whether short-term trends are all I could get.

What are your thoughts?


If that were really possible and someone like us could do it, the big players would have hired someone to do it and everyone would be doing it by now.
If that were the case, you'd probably hear a lot more about technology that does this.


Yeah, that's what I thought too, but I decided it would be worth a try anyway. And if there is some random coder with a working solution, maybe they wouldn't be telling the world about it.

But I read about college researchers doing analysis with Twitter who got better returns than popular investment schemes (when the market was dropping; tests weren't released for when the market was rising). Can't find the article, but this is a similar one


I'm partly keen on this idea because it's a good coding project, and recently I got really interested in the stock market. The bonus is if it gives good returns.


I've barely started work on a personal thing i want to do that'll need to integrate Natural Language Processing. I started looking into the nltk for Python before realizing I'll really need to do some research first.


I've actually been working on it on and off for a couple years, funny enough.

In my experience, predictions of up to a week or so can be accurate enough to be useful but beyond that it gets very hard to keep a classifier inside any meaningful confidence interval. Predictions of one trading day are quite accurate but hard to use in any practical way.

Scraping and parsing news/twitter can help some but it's extremely noisy. I only use it to rule stocks out because, in my opinion, good businesses don't need to make waves to make steady profits. Trying to use news to actively purchase puts you in the same boat as people who are trying to guess the market; don't compete with them, they're just going to fail.

One thing I want to try in the near future is using news to feed a semantic web parser. The semantic web is essentially a large, heavily interconnected network of names and their relationships, like businesses, their key members, where they are, etc. With a mix of semantic lookup/parsing, network traversal, and an enormous heap of patience it should be possible to connect news events to stocks through more distant tangents. This is still, of course, focused on eliminating potentially risky purchases, but it's much more likely to put together that it was the wife of Oracle's CEO that got that DUI, for (fictitious) example, and thus preemptively rule out purchasing that stock.


Good to hear from someone with experience. Thanks for the info. With only up to a week to work with at a time, brokerage fees mean it's possible to come out at a loss even if there's a slight profit (unless there's a no-brokerage-fee option).

Something like ruling out a stock would be useful but I would have thought a lot of investors are naturally good at doing that without a program.

I would feel most comfortable just using price data at first: trends in the sell price/volume, trends of the same industry group (mining, energy), as well as the rest of the stock exchange. It should be easier to test against real data and actually see whether it predicted the correct output. But it sounds like it's not guaranteed to give good info all the time.


I've gone over this problem in my head ad nauseam. Basically I came to the conclusion that it's impossible to make consistent, long-term gains in the market using algorithms, because as soon as a particular model appears to be successful, it will be adopted by a relatively large number of big players who all found the same thing using similar approaches. After you start losing money, the algorithm "evolves" and starts making money again, but the same thing happens, and the cycle continues forever.



do you really take Daily Mail articles seriously?


What's wrong with this particular article? I think it does a decent job of covering some of the intricacies of automated trading without overwhelming its readers. Don't dismiss an article just on its source.

If you want to just use stock data then scrape Yahoo Finance. It's free and the data isn't too bad (but beware that 1. it occasionally lists splits as traded values, which is kinda fuark'd, and 2. it suffers from survivorship bias: stocks that are no longer traded aren't supported, so the model you train will never have seen a stock 'die').
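One cheap guard against the unadjusted-split problem: flag any day where the close drops by roughly a whole-number factor overnight, which is far more likely a 2:1 or 3:1 split than a real crash. A rough sketch; the ratios, tolerance, and prices are all invented for illustration:

```python
def suspected_splits(closes, ratios=(2, 3), tol=0.05):
    """Return (index, ratio) pairs where close[i] looks like close[i-1]
    divided by a whole-number split ratio -- likely an unadjusted split."""
    flags = []
    for i in range(1, len(closes)):
        for r in ratios:
            # overnight drop within tol of an exact r:1 split
            if abs(closes[i - 1] / closes[i] - r) < r * tol:
                flags.append((i, r))
    return flags

closes = [100.0, 101.0, 50.4, 51.0, 52.0]   # day 2 looks like a 2:1 split
print(suspected_splits(closes))  # [(2, 2)]
```

Flagged days could then be repaired (multiply earlier prices down) or just excluded from training.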

>Something like ruling out a stock would be useful but I would have thought a lot of investors are naturally good at doing that without a program.

Ah, but the goal here is for your 'trader' to pick a few good stocks every prediction cycle. Anything that bumps a potentially risky ticker from the list of good candidates is a significant advantage for the system; it helps the trader spot a ticker that will show anomalous results despite seemingly promising data. (To continue the Oracle contrapositive: the CEO is about to go missing for a couple of business days while he resolves the mess his wife made, investor confidence in the firm falters because he's disappeared, and stock value drops by 5% over 3 days.)

Something to keep in mind: the stock market always grows in the long run. Roughly 25% of stocks grow. All you really need to make money is to always pick one of those (fairly many) stocks that are growing. So don't take risks!
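"Don't take risks" could be translated into a dumb trend filter: only consider tickers whose short-term moving average sits above the long-term one. A toy sketch; the window sizes are made up and this is obviously not investment advice:

```python
def sma(prices, window):
    """Simple moving average over the last `window` prices."""
    return sum(prices[-window:]) / window

def is_uptrend(prices, short=5, long=20):
    """Crude trend filter: short-term average above long-term average."""
    if len(prices) < long:
        return False                    # not enough history to judge
    return sma(prices, short) > sma(prices, long)

rising = [float(i) for i in range(1, 31)]       # 1.0, 2.0, ..., 30.0
falling = list(reversed(rising))
print(is_uptrend(rising), is_uptrend(falling))  # True False
```

Real screens would add volume and volatility filters, but the "rule things out, don't chase" idea is the same.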


Thanks for the advice. I was going to use Yahoo data, so I'll keep that in mind. And that's true about picking a risk-free stock; using a stock market simulator has helped me find a couple of things that should stand out as an obvious no-no. I'm in the middle of my exam period, but when I'm done I'll make a start.


Power to the People: How One Unknown Group of Researchers Holds the Key to Using AI to Solve Real Human Problems
>What’s stopping AI from being put to productive use in thousands of businesses around the world isn’t some new learning algorithm. It’s not the need for more programmers fluent in the mathematics of stochastic gradient descent and back propagation. It’s not even the need for more accessible software libraries. What’s needed for AI’s wide adoption is an understanding of how to build interfaces that put the power of these systems in the hands of their human users. What’s needed is a new hybrid design discipline, one whose practitioners understand AI systems well enough to know what affordances they offer for interaction and understand humans well enough to know how they might use, misuse, and abuse these affordances.


File: 1467544924278.png (184.62 KB, 200x169, feb770c3eb255061fcddc7fe89f0e4a30d9fdb603e783b399dd0f7986ac3da8b.png)

> Solve Real Human Problems
> businesses


Can someone help me find a guide for machine vision? I want to try creating a simple ai that can recognize features in images, but I'm not sure where to start. Help much appreciated!


You might want to look into Convolutional Deep Belief Networks.
If you are a complete beginner you might want to watch this


Look into OpenCV (as a library or simply for the trained corpus).

To learn, read about the Viola-Jones algorithm, Haar features, and the integral image.
I'm doing an internship on the subject this summer so I may be able to answer specific questions.
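The integral image is the trick that makes Viola-Jones fast: precompute cumulative sums once, and then the sum inside any rectangle (which is all a Haar feature needs) is four lookups. A quick stdlib sketch:

```python
def integral_image(img):
    """ii[r][c] = sum of img over the rectangle [0..r) x [0..c).
    The extra row/column of zeros keeps the lookups branch-free."""
    rows, cols = len(img), len(img[0])
    ii = [[0] * (cols + 1) for _ in range(rows + 1)]
    for r in range(rows):
        for c in range(cols):
            ii[r + 1][c + 1] = (img[r][c] + ii[r][c + 1]
                                + ii[r + 1][c] - ii[r][c])
    return ii

def rect_sum(ii, top, left, bottom, right):
    """Sum of img over rows [top..bottom) and cols [left..right),
    in exactly four lookups regardless of rectangle size."""
    return (ii[bottom][right] - ii[top][right]
            - ii[bottom][left] + ii[top][left])

img = [[1, 2, 3],
       [4, 5, 6],
       [7, 8, 9]]
ii = integral_image(img)
print(rect_sum(ii, 0, 0, 2, 2))  # 1 + 2 + 4 + 5 = 12
```

A Haar feature is then just the difference of two or three such rectangle sums, which is why the detector can afford to evaluate thousands of them per window.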


The course material seems pretty cutting edge and the lectures are on youtube.


Someday speculation will become so meaningless that the stock market will finally follow more real-data driven evaluation of companies. The stock market will evolve back to what it once was: investment.


>the cycle continues forever



Depending on what the poster is looking to do, I'd recommend HOG (histogram of oriented gradients) for general object detection. I think Viola-Jones is a little older, and more tailored to a particular application (face detection), though please correct me if that's wrong. Before CNNs came around, I think the benchmark for object detection in images was generally along the lines of linear classifier + HOG+SIFT+GIST+color (or just HOG for simplicity), so you shouldn't go too wrong with that.
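The core of HOG is just binning gradient orientations per cell, weighted by gradient magnitude. A toy sketch of a single cell (bin count and test patch made up; real HOG adds overlapping blocks and normalization on top):

```python
import math

def orientation_histogram(patch, bins=9):
    """Magnitude-weighted histogram of gradient orientations for one cell.
    Gradients via central differences; angles folded into [0, 180)."""
    h, w = len(patch), len(patch[0])
    hist = [0.0] * bins
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = patch[y][x + 1] - patch[y][x - 1]
            gy = patch[y + 1][x] - patch[y - 1][x]
            mag = math.hypot(gx, gy)
            ang = math.degrees(math.atan2(gy, gx)) % 180.0
            hist[int(ang // (180.0 / bins)) % bins] += mag
    return hist

# a vertical edge: every gradient points horizontally,
# so all the energy lands in the 0-degree bin
patch = [[0, 0, 10, 10]] * 4
hist = orientation_histogram(patch)
print(hist.index(max(hist)))  # 0
```

The full descriptor concatenates these per-cell histograms over the detection window and feeds them to a linear SVM.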


That was pretty cool. How would I get into making something like this for another emulator? Don't need specific instructions, just a point in the right direction.


I wonder how much of this AI work can be applied to IA.

I'd imagine all of it can, actually. That would give it immediate full application, as well.


I am currently reading http://www.deeplearningbook.org/ for the basics of deep learning and artificial neural networks. I'd like to try these on some computer vision problems. Seems like a new trend to try NNs on all the classical problems because they are much faster than the classical methods.

Any recommended frameworks/libraries to play around with? I thought about using TensorFlow because it seems quite popular at the moment...


Another game, another emulator, another universe, the work is done here:
Keras is by far the most efficient tool when it comes to tinkering and experimenting with NNs.


File: 1484857489986.png (371.91 KB, 200x80, neuraldoodle.png)

Even if it is a bit late I wanted to say thanks.

I immediately started looking into Keras and started building my infrastructure around it, and I tend to forget everything around me when tinkering on my projects. The fit_generator method of models is especially nice: you pass a Python generator which loops over the data indefinitely. Because I don't have the infrastructure for deep learning myself (it's only a private hobby project), I saved my data on a free cloud storage service and use a student computer for training over ssh. That means I have to dynamically load the files from the cloud into local memory. A generator is a nice abstraction for all of this because it doesn't matter whether the data is available locally or online.
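For anyone wanting the same setup, the generator pattern itself is independent of Keras: loop over the dataset forever and yield fixed-size batches, with the per-record loader (local file, cloud fetch, whatever) swapped in. A sketch with a stand-in loader:

```python
def batch_generator(records, batch_size, load=lambda r: r):
    """Loop over `records` forever, yielding lists of `batch_size` loaded
    items -- the endless stream that fit_generator-style training expects.
    `load` stands in for whatever fetches one record (disk read, HTTP, ...)."""
    batch = []
    while True:                     # never StopIteration: wrap around instead
        for rec in records:
            batch.append(load(rec))
            if len(batch) == batch_size:
                yield batch
                batch = []

gen = batch_generator([1, 2, 3, 4, 5], batch_size=2)
print(next(gen), next(gen), next(gen))  # [1, 2] [3, 4] [5, 1]
```

Note the last batch wraps around the epoch boundary; Keras handles epoch accounting separately via steps_per_epoch, so the generator itself never has to stop.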

I still have some problems with finding an appropriate model and making it converge… but for these things I simply need to study more theory and run more tests and experiments. Now, with a standing infrastructure and a nicely prepared dataset, I can finally start the fun part. (Although I often get errors and warnings because of low GPU memory. I hope to find a model which is small enough to train efficiently but still performs well.)

Again: Thank you!

Oh, and Universe seems cool, too. But atm I am more interested in supervised learning. Reinforcement learning is still too advanced for me; I barely understand the basics of machine learning.


I have my revision notes from studying machine learning at university. https://tblah.github.io/ml-revision/

It covers a lot of simpler algorithms with a short introduction to neural networks and the beginnings of an introduction to support vector machines.

The exam I was revising is more theory than programming but I am sure some Lains like maths.

Feel free to contribute and ask me questions.

>inb4 I am made to regret identifying myself


machine learning has always interested me, but I never seem to have the time in my busy schedule (between school, programming, studying other things, and wasting time on IRC) to really get into its guts. Hopefully I can take a class on it and be forced to study it, but until then I'll just stick with the basic 10-minute intro to neural nets that everybody knows.


My experience studying machine learning is that you can do a lot just from knowing those 10 minute introductions and hacking with some libraries. There is a lot of theory between doing this and anything more complex, and even then it is a bit of an art choosing the correct features (inputs) and choosing an appropriately complex model.


Are expert systems dead? Everyone seems to be talking about neural networks while the symbolic approach to AI seems to be neglected today.


neural nets basically do the same thing but sub-symbolically, which turns out to be more effective.


I've heard that neural networks can't explain their output to the user unlike expert systems, is it true? When a neural network fails, it's much harder to diagnose the problem.


I'm not >>22807, but yes, that's true. Neural nets for the most part tend to be nothing more than optimization programs which, given high-dimensional data, output a prediction of that data's class through a series of operations on matrices/tensors/vectors combined with some rather involved calculus.

There's no way (currently) to apply semantic representation to said operations so it's difficult to understand why a model might be doing poorly or extremely well. There are some general metrics used of course, and iirc some groups are trying to turn these into less of a black box. And on the notion of expert systems, I think there was a paper recently where a group trained a series of networks on certain tasks, and then treated those networks as "expert systems" which were utilized by a higher-level network to pass data to them depending on the task at hand. In other words, it's a step towards trying to create a system which can handle multiple, distinct problems rather than a single one.
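"Nothing more than optimization programs" in miniature: a single sigmoid neuron fit by plain gradient descent on a made-up toy dataset. There's no semantic representation anywhere, just calculus nudging two numbers until the loss shrinks, which is why the trained weights don't explain themselves:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# toy data: label is 1 exactly when the single feature is positive
data = [(-2.0, 0), (-1.0, 0), (1.0, 1), (2.0, 1)]

w, b, lr = 0.0, 0.0, 0.5
for _ in range(2000):                   # plain stochastic gradient descent
    for x, y in data:
        p = sigmoid(w * x + b)
        grad = p - y                    # d(cross-entropy loss)/d(w*x + b)
        w -= lr * grad * x
        b -= lr * grad

# the neuron now classifies every toy point correctly...
print(all((sigmoid(w * x + b) > 0.5) == (y == 1) for x, y in data))  # True
# ...but w and b are just two opaque floats; nothing in them says "positive
# means class 1" the way an expert system's rule would.
```

Scale this up to millions of weights across many layers and you have exactly the diagnosability problem described above.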

Hopefully that was clear, I'm still learning these things myself.

Unrelated but is anyone doing research in the field here? I need connections + advice and my university is lagging behind when it comes to this ;_;


yeah, but there are some ways of figuring out how they make those connections, by looking at what inputs cause certain neurons to fire. There are some neat programs that do this for you, but I forget their names now.