I hate the term "artificial intelligence" because "artificial" implies that it is fake. Unfortunately, if you look at the history of this field, much of the early work was indeed artificial. Researchers would manually program their systems to achieve certain tasks, but the systems did not use feedback from their experience to adjust their future behavior. They were static systems.
In my mind, "machine learning" is about the "learning". There is nothing artificial about it. The system may not necessarily be learning how humans learn, but it is learning. However, the terms AI and machine learning are still largely used interchangeably.
The field has gone through many twists and turns and several AI winters, but the early promises finally appear to be coming true. I believe three factors were necessary to let this happen: true learning systems that use feedback, significant computational horsepower, and lots and lots of data. A short 2.5-page paper in 1986 on back-propagation of errors helped address the first factor, but it wasn't until the recent emergence of the cloud that the last two factors could be addressed. Graphics processing units (GPUs) are also being used to address the computational horsepower factor. It wasn't until about 2012 that these factors started to come together.
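The essence of that first factor, a system that adjusts itself from feedback, can be sketched in a few lines. The following is an illustrative toy, not the 1986 algorithm itself: a single linear neuron trained by gradient descent, where the prediction error is fed back to nudge the weights. The function and data here are hypothetical examples.

```python
# A minimal sketch of learning from feedback: a single linear neuron
# fit by gradient descent. Illustrative only.

def train(data, lr=0.1, epochs=200):
    """Fit y = w*x + b by repeatedly feeding the error back into w and b."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in data:
            pred = w * x + b
            err = pred - y       # feedback: how wrong was the prediction?
            w -= lr * err * x    # adjust weights against the error gradient
            b -= lr * err
    return w, b

# Learn y = 2x + 1 from a handful of examples
data = [(0, 1), (1, 3), (2, 5), (3, 7)]
w, b = train(data)
print(round(w, 2), round(b, 2))  # close to 2 and 1
```

Unlike the hand-programmed systems described above, nothing here encodes the answer; the weights start at zero and every update is driven entirely by the error signal.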
The a16z podcast "AI, Deep Learning, and Machine Learning: A Primer" provides a great 45-minute history and introduction to the field, including a nice demo of Google's TensorFlow at about the 25:45 mark in the video. The book "The Master Algorithm" covers similar ground but in more depth.