Genetic Artwork
Thought I'd share what a long-time forum lurker has produced. This is a highly modified version of Roger Alsing's method of producing art with a genetic algorithm. Alsing's exact execution arguably isn't a true GA, which inspired me to work on a more GA-style implementation.
The screenshots below highlight a variety of stages of the implementation, but at the current spec, here are the major differences between my implementation and Alsing's:
- Different plugins for the "artist", the "pool", and the "fitness"
The fitness module assigns a score to a Bitmap object generated by an artist, the artist produces an image (probably, but not necessarily, created by genes), and the pool manages artists and decides, based on their scores, what to do with them. By implementing these as plugins I was able to experiment very easily with different ideas while keeping some core functionality completely isolated from whichever component I was working on. (A rough sketch of this separation appears after the list.)
- "Bastardized Annealing"
Simulated annealing is half-implemented by a combination of the "experimental artist" module and the "multi pool" module. While not a strict interpretation of the algorithm, here is how it is approximated: within the artist, individual "genes" (a drawing expression, i.e. a shape and its description) have a temperature which gradually cools. While the temperature is hot, it allows the gene to bounce around a lot: move to more places on the canvas, show greater variety in colour, and so forth. Within the gene pool module there is a function, e(x, b) = sqrt( (1/x) + (1/3)(1/b) ), which determines the probability that a less efficient "solution" (artist) will be chosen, where x is the absolute difference between the score of the best winner of a given generation and that of the current "elitist selected" winner, and b is the score of the "elitist" winner. (A sketch of this acceptance rule appears after the list.)
- Extended Primitives
The drawing expressions in the AST that I developed allow for curves, polygons, and outlined shapes. Additionally, the rotation of an expression can be mutated, as can its scale and origin.
- Gene Momentum
Individual branches of the AST can remember how they previously mutated. If a gene is later told to mutate by the mutation method (mutations are called by the clone and child-creation methods, and can also be invoked directly by the pool), it will tend to, though not necessarily, mutate in the same direction it has mutated before.
- "Gravy" Colour
In order to improve how the artist assigns colours, I created a colour space which I called gravy-space (because I couldn't think of a better word). It has more colour-family-oriented transitions, rather than being very logical like RGB. I considered using HSV, but since this is a rather experimental project, I decided to experiment with colour spaces as well. A gravy colour has four components, a, b, c and d, and converts to RGBA as follows:
gravy_r = a / (1 - (d * 0.999)) / b ^ (c * d)
gravy_g = b / (1 - (d * 0.999)) / c ^ (a * d)
gravy_b = c / (1 - (d * 0.999)) / a ^ (b * d)
gravy_a = d
The 0.999 stands in for a repeating nine (effectively 1); it is approximated for the sake of appeasing the computer. (A sketch of this conversion in code appears after the list.)
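To make the plugin split concrete, here is a minimal Python sketch of how the three roles could be separated. The class and method names are my own illustration, not the project's actual plugin API.

# Hypothetical sketch of the artist / pool / fitness separation; names are
# illustrative only, not the real plugin interfaces.

class FitnessPlugin:
    def score(self, candidate_bitmap, source_bitmap):
        """Return a score for a rendered bitmap; lower means closer to the source."""
        raise NotImplementedError

class ArtistPlugin:
    def render(self):
        """Produce a bitmap, e.g. by drawing the artist's current genes."""
        raise NotImplementedError

    def mutate(self):
        """Perturb the artist's genes in place."""
        raise NotImplementedError

class PoolPlugin:
    def step(self, artists, fitness, source_bitmap):
        """Score every artist and decide who reproduces, mutates, or is discarded."""
        ranked = sorted(artists, key=lambda a: fitness.score(a.render(), source_bitmap))
        return ranked  # a concrete pool acts on this ranking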
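And a minimal sketch of the e(x, b) acceptance rule from the "Bastardized Annealing" item, reading the formula as sqrt(1/x + (1/3)(1/b)). The clamp to [0, 1] and the guard against x = 0 are my additions; the post does not say how those cases are handled.

import math
import random

def accept_probability(x, b, eps=1e-9):
    # e(x, b) = sqrt( (1/x) + (1/3)(1/b) )
    # x: absolute difference between the best score of the generation and the
    #    score of the current "elitist selected" winner
    # b: score of the "elitist" winner
    e = math.sqrt(1.0 / max(x, eps) + (1.0 / 3.0) * (1.0 / max(b, eps)))
    return min(e, 1.0)  # clamp, since the raw value can exceed 1

def keep_less_fit(x, b):
    # Keep the less efficient artist with probability e(x, b).
    return random.random() < accept_probability(x, b)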
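Finally, the gravy-to-RGBA conversion as code. I am assuming the exponent binds tighter than the divisions (i.e. r = (a / (1 - d*0.999)) / b**(c*d)) and that all four components lie in (0, 1]; the post does not state the precedence, the component ranges, or how out-of-range results are clamped.

def gravy_to_rgba(a, b, c, d):
    # Assumes a, b, c, d are all in (0, 1]; a zero base would divide by zero.
    k = 1.0 - d * 0.999          # 0.999 stands in for a repeating nine (~1)
    r = a / k / (b ** (c * d))
    g = b / k / (c ** (a * d))
    bl = c / k / (a ** (b * d))
    return (r, g, bl, d)         # alpha is simply d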
Some more images:
Old version, hosting the NN plugin.
Lots of images :)
This shows the changemap that the fitness algo creates... it processes the source image, and then calculates the changes in RGB values for each pixel. This creates a coefficient that the error for that pixel is multiplied by. This causes solutions which duplicate the high contrast areas of an image to be favored for reproduction over solutions which duplicate low contrast areas; although ultimately, the only way to get a score of "0" (fittest) is to duplicate every pixel. This is calculated with the sum of the euclidean distance between the produced RGB and the source RGB for each pixel, multiplied by the amount of contrast per pixel.
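Here is a rough sketch of that scoring in Python/NumPy. Using the gradient magnitude is my guess at how "the changes in RGB values for each pixel" are measured; the actual changemap computation may differ.

import numpy as np

def contrast_map(source):
    # source: (H, W, 3) array. Estimate per-pixel RGB change with the gradient
    # magnitude; the real changemap may use a different neighbourhood.
    dy, dx = np.gradient(source.astype(float), axis=(0, 1))
    return 1.0 + np.sqrt((dx ** 2 + dy ** 2).sum(axis=2))   # coefficient >= 1

def score(candidate, source, weights):
    # Sum over pixels of (Euclidean RGB distance) * (contrast coefficient);
    # 0 means a perfect reproduction, lower is fitter.
    diff = candidate.astype(float) - source.astype(float)
    per_pixel = np.sqrt((diff ** 2).sum(axis=2))
    return float((per_pixel * weights).sum())

# usage: weights = contrast_map(source); s = score(candidate, source, weights)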
Could you tell us in English (erm... layman's terms) WTF this is?
As far as I can tell: Using AI methods to "teach" a program what to draw, hopefully leading to results of artistic quality.
Some of those pics do look cool.
Quote: Original post by zer0wolf
Could you tell us in English (erm... layman's terms) WTF this is?
A vector representation of a drawing (not a specific drawing, just a drawing, nothing more than a scribble) is generated. It is scored by some fitness algorithm (in this case, the summed Euclidean distance between the RGB values of all pixels in the generated drawing versus the source image). Drawings that come close have a high chance of reproducing and making more drawings. These child drawings will mutate (and tend to mutate as they have previously) and can expand: the number of points in an expression can increase or decrease, the number of drawing elements in the drawing can increase or decrease, colour values can change, curve tension can change, and so on.
Eventually it converges on something that satisfies the fitness function. In other words, it draws a picture through trial and error by "looking" at a source image. It's not really AI so much as GA-influenced image-based rendering. I am currently researching face detection so that it will not be limited to re-generating a source image, but could also generate its own drawing that, according to the NN performing fitness, qualifies as a face - or any other object you train it to detect in an image. That's the dream, anyway.
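For anyone who wants the loop itself spelled out, here is a bare-bones sketch of the process described above. The selection and mutation details are simplified placeholders (clone() standing in for the clone/child-creation methods that trigger mutation), not the program's actual code.

import random

def evolve(population, fitness, generations, survivors=10):
    # fitness(drawing) returns a score; lower means closer to the source image.
    for _ in range(generations):
        population.sort(key=fitness)            # fittest drawings first
        parents = population[:survivors]        # these get to reproduce
        children = []
        while len(parents) + len(children) < len(population):
            parent = random.choice(parents)
            children.append(parent.clone())     # cloning triggers mutation
        population = parents + children
    return population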
Try putting The Scream through it. I'd be interested to see what it comes out with.
looks very interesting, I like the Carl Sagan picture as it is!
The eventual goal sounds very cool, but you'd have to put in a large number of similar head shots. To train it on faces, maybe you could put it online and ask visitors for a webcam mugshot. This should result in a good batch of similarly positioned faces. It would be awesome to have the learning process output as a movie.
I always wondered what would happen if random learning algorithms were put online for long periods of time and allowed to learn by visitors interacting with it.
I would be very interested to see what comes out of something like that.