# Dialog Acts

*This post is also available on the Toucan AI Blog. It is Part 1 in a series on the development of our new Dialog Act Recognition model.*


I just finished doing a fun brainteaser concerning the pursuit of pirates, which I thought I could share. Here’s the puzzle itself, in the words of Abhishek Sinha:

This is probably among the coolest theorems I’ve ever seen; the results are so counterintuitive that it takes a while to wrap your head around the implications. The Riemann Series Theorem, AKA the Riemann Rearrangement Theorem, states that for any conditionally convergent series, its terms can be rearranged to make it sum to *any* value. As a reminder, a series is conditionally convergent if it converges, but the series of the absolute values of its terms diverges. So,
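To get a concrete feel for the theorem, here’s a minimal Python sketch (not from any particular proof; the function name is my own) of the standard greedy rearrangement applied to the alternating harmonic series 1 − 1/2 + 1/3 − 1/4 + …, which normally converges to ln 2 but can be steered toward any target you like:

```python
def rearranged_partial_sum(target, n_terms=100_000):
    """Greedily rearrange the alternating harmonic series toward `target`:
    add unused positive terms (1/1, 1/3, 1/5, ...) while below the target,
    and unused negative terms (-1/2, -1/4, ...) while at or above it."""
    pos = 1      # denominator of the next positive term
    neg = 2      # denominator of the next negative term
    total = 0.0
    for _ in range(n_terms):
        if total < target:
            total += 1.0 / pos
            pos += 2
        else:
            total -= 1.0 / neg
            neg += 2
    return total
```

Because the terms shrink to zero, each crossing of the target overshoots by less and less, so the partial sums close in on whatever target you pick.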

Currently, we seem to be in the middle of a huge shift in UI and UX design. Gone are the days of shiny bezels and glossy buttons, of stitched leather trims and linen backgrounds. Now, clean lines and simple, content-first design is the way to go. Not wanting to be left behind, I decided to give Flat design a try for a little game that I was planning on building. While that particular game didn’t pan out, I did like the UI that I built around it, so I decided to pull it out for use in other puzzle games. Without further ado, I present: Fluzzle!

All right, let’s talk Music Generation. I was reading about Genetic Algorithms recently, and I wondered whether they could be applied to music. Turns out they can, and have been. There’s some pretty cool music that’s been generated through these algorithms. Unfortunately, there isn’t really a good way for computers to tell good music from cacophony, so the fitness check in these algorithms is usually people. Though I wish I had enough knowledge about AI to take a stab at making my computer appreciate music, I’m sadly nowhere close. So I read some more, and settled upon a different approach: Markov Chains. Don’t worry if you have no idea what those are, I didn’t until recently either. Just keep reading…
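To make the idea concrete before diving in, here’s a toy sketch in Python (made-up function names, and a trivially small “melody” as training data) of how a first-order Markov chain can learn note-to-note transitions from an existing piece and then walk them to generate new material:

```python
import random
from collections import defaultdict

def build_chain(melody):
    """Record, for each note, every note that followed it in the melody.
    Repeated successors naturally weight the transition probabilities."""
    chain = defaultdict(list)
    for cur, nxt in zip(melody, melody[1:]):
        chain[cur].append(nxt)
    return chain

def generate(chain, start, length, seed=0):
    """Walk the chain: repeatedly pick a random recorded successor."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        out.append(rng.choice(chain[out[-1]]))
    return out
```

Real systems use richer states (pitch plus duration, or several previous notes), but the core mechanic is exactly this lookup-and-sample loop.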

So recently, I’ve been interested in fractals. For those who don’t know, fractals “are typically self-similar patterns, where self-similar means they are ‘the same from near as from far’” (Wikipedia). If that doesn’t make a whole lot of sense, just hold on for a sec. To get a better understanding of both Fractals and HTML5 WebWorkers, I made a small fractal visualizer. It draws two types of fractals, Mandelbrot and Julia Sets, and allows users to zoom in and absorb all the fractal-ly awesomeness. I’m not going to go in depth explaining these two varieties of fractals, since there is a fairly thorough description in the visualizer itself. So, without further ado, the visualizer:
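The heart of any Mandelbrot renderer (including, conceptually, the one in the visualizer, though its actual code lives in JavaScript WebWorkers) is the escape-time iteration. A minimal Python sketch of that core, with names of my own choosing:

```python
def mandelbrot_iters(c, max_iter=100):
    """Iterate z -> z^2 + c from z = 0. Return how many steps it takes
    for |z| to exceed 2 (the escape radius), or max_iter if it never does.
    Points that never escape belong to the Mandelbrot set."""
    z = 0j
    for n in range(max_iter):
        if abs(z) > 2:
            return n
        z = z * z + c
    return max_iter
```

A renderer just runs this for the complex number under each pixel and maps the iteration count to a color; Julia sets use the same loop with a fixed c and the pixel supplying the starting z.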

Over the years, mathematicians have dedicated tremendous amounts of research to conceiving of new and faster methods of approximating Pi. Hundreds of books have been written on this remarkable constant, and on disparate attempts to get another thousand digits. The current, bleeding-edge algorithms are remarkably fast, but also, quite frankly, often mind-boggling. When looking at all of these crazy algorithms, there’s often the risk of losing sight of the forest for the trees. Sometimes, it’s fun to look at a method that you can understand intuitively with nothing more than middle-school math, just to get a new, deeper perspective on Pi and what it really means. That’s where the Monte Carlo method of approximating Pi, probably one of my favorites, comes in. A Monte Carlo method is simply any method that uses repeated random simulations to calculate probabilities. Basically, instead of using ‘real’/theoretical statistics to calculate the probability of heads on a fair coin, just simulate flipping it a million times and record what percentage is heads/tails. While it may not be as accurate, and is potentially orders of magnitude slower, the Monte Carlo method has the advantage of being extremely easy to comprehend conceptually, and usually also to implement…
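The whole method fits in a few lines. Here’s one way it could look in Python (a sketch with names of my own; seeded so runs are repeatable): throw random points at the unit square, and count what fraction lands inside the quarter circle of radius 1, which has area π/4.

```python
import random

def estimate_pi(n_samples, seed=0):
    """Sample points uniformly in the unit square [0,1] x [0,1].
    The fraction with x^2 + y^2 <= 1 approximates pi/4, so
    multiplying by 4 gives an estimate of pi."""
    rng = random.Random(seed)
    inside = sum(
        1
        for _ in range(n_samples)
        if rng.random() ** 2 + rng.random() ** 2 <= 1.0
    )
    return 4.0 * inside / n_samples
```

The error shrinks only like 1/√n (so each extra digit of Pi costs roughly 100× more samples), which is exactly the “orders of magnitude slower” trade-off mentioned above.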

This is a simple, easy-to-understand tutorial on doing regression mathematically. For those who found this page accidentally, and are unsure what’s going on, regression is, roughly, a way to find a line/curve that approximates a set of data (a line of best fit). This is often necessary when dealing with data from experimentation, since real data is unlikely to fit the expected type of line perfectly. Real life isn’t linear!
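As a preview of where the math leads, the simplest case (a straight line, fit by ordinary least squares) has a closed-form answer in terms of means and deviations. A Python sketch of that formula, with names of my own:

```python
def linear_fit(xs, ys):
    """Ordinary least-squares fit of y = m*x + b.
    Slope m is the covariance of x and y over the variance of x;
    intercept b makes the line pass through the point of means."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    m = cov / var
    b = mean_y - m * mean_x
    return m, b
```

On noisy experimental data this returns the line minimizing the sum of squared vertical errors; on perfectly linear data it recovers the exact slope and intercept.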

I just started learning Haskell. So far, this language has consistently blown my mind, at every turn. I feel like every new feature is a revelation. But damn, is it hard. Till now, I’ve kept to trying simple exercises from the book I’m using, Real World Haskell (a great book). Yesterday and today, I tried my first not-totally-trivial program: I made a Naive Bayes Classifier. My classifier is in two parts: a module for the classifier and a command-line interface. The main classifier is only 34 lines, including comments! Anyway, I thought I’d put it online so that anyone can give me any feedback or comments, and if it can help anyone else, all the better. DISCLAIMER: I’m just learning Haskell, so don’t consider this a tutorial, or assume it uses any ‘best practices’, or is even remotely idiomatic…
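For anyone unfamiliar with the technique itself, here’s a minimal multinomial Naive Bayes sketch in Python (this is *not* the Haskell code from the post; the class, method names, and the use of Laplace smoothing are all my own choices): classify a document by the class maximizing the prior times the product of per-word likelihoods.

```python
import math
from collections import Counter, defaultdict

class NaiveBayes:
    def fit(self, docs, labels):
        """Count how often each word appears under each class label."""
        self.word_counts = defaultdict(Counter)
        self.class_counts = Counter(labels)
        for words, label in zip(docs, labels):
            self.word_counts[label].update(words)
        self.vocab = {w for c in self.word_counts.values() for w in c}
        return self

    def predict(self, words):
        """Pick the class with the highest log prior + log likelihoods,
        using add-one (Laplace) smoothing for unseen words."""
        total_docs = sum(self.class_counts.values())

        def log_score(label):
            counts = self.word_counts[label]
            total = sum(counts.values())
            prior = math.log(self.class_counts[label] / total_docs)
            return prior + sum(
                math.log((counts[w] + 1) / (total + len(self.vocab)))
                for w in words
            )

        return max(self.class_counts, key=log_score)
```

The “naive” part is assuming words are independent given the class, which is what lets the likelihood factor into that simple product.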

**TL;DR:** I felt I was missing out on HN’s awesome content. So I made a home page that automatically opens the top link, forcing me to see it. You can use it too; URL at the bottom of the post…