Deep Learning #4

I’ve finished my first unit on deep learning. For my final day I explored some media other than the Coursera course I had been watching. I read a few chapters of the book Hands-On Machine Learning with Scikit-Learn and TensorFlow, and really enjoyed it. Honestly, I found it much easier to follow than the Coursera course, which is the opposite of what I was expecting. The writing is clear and the examples are interesting, which kind of makes me wish I had started with the book instead.

I played with a neural network visualizer to get a better feel for how they worked. I also looked through the Google AI experiments page to see what kinds of things people are building with the technology. Finally, I watched an interview on Coursera with Geoffrey Hinton, one of the pioneers of deep learning. He talked about how the field arose and the future directions he sees it going in.

This more wide-ranging approach worked well for me. It certainly held my interest better than just watching videos for hours straight, and I liked exploring different facets of the community and different ways of learning.


AI Meetup Event

I attended an event hosted by Galvanize Austin that I found through the Austin Data Science group on meetup.com. Galvanize is a for-profit data science and coding school, and the organizer of at least a few different Meetup groups (presumably so they can advertise their services).

The event was a talk titled “Star Trek Bridge Crew: Artificial Intelligence in Virtual Reality with IBM Watson”. I had trouble finding a strictly deep learning or neural network event, but because those topics are so closely connected to artificial intelligence, I thought this talk would still be relevant to my interests.

There were many things that surprised me about this event, including:

The People

Because the theme was Star Trek-related, I was expecting the audience to be a lot, well, nerdier in a stereotypical way. I was really impressed by how diverse it was: all different genders, races, and ages were well represented.

The Food

The event was from 6–8pm, so I thought there might be snacks, but I definitely didn’t expect all the excellent food (spring rolls, mac & cheese bar, etc.) and free beer and wine too! I mean, that alone made it a worthwhile event to go to (starving grad student and all).

The Presenter

The presenter from IBM was much more engaging than I was expecting. I looked at his LinkedIn afterwards and it seems that he’s a pretty seasoned public speaker.

The Space

The event was held downtown at Galvanize Austin. The space was a lot swankier than I was expecting. It definitely had what I think of as a “tech workplace” feel (clean, modern, lots of windows, group workspaces). I’m guessing this is to impress current and potential students and prepare them for their supposed future workplace.

The Content

I had thought the talk would be more focused on promoting the Star Trek game, but it was actually more of a general overview of artificial intelligence and virtual reality. It was interesting to hear from an industry professional about current practices and what he thought the future trends would be.

Final Thoughts

Overall I had a much better time than I thought I would. The space and the free food and drinks made a great first impression, and the talk was more engaging and relevant to my interests than I’d expected. I also had the opportunity to try out a VR headset for the first time and experience how AI voice commands work in the Star Trek Bridge Crew game. Honestly, I was expecting the VR headsets to be a lot more impressive than they currently are, but it was still really fun to try one out. Galvanize promoted some of their upcoming events, but they definitely weren’t too pushy about signing people up for classes.

I’m planning on attending some of their events during the Cognitive Builder Faire. If anyone else from class is interested, you can use the code GalvanizeVIP to register for any of the tutorials for free and save $20.

Deep Learning #3

I’ve known about Carol Dweck’s fixed/growth mindset theory for a few years now, and I actually do try to apply it when I’m learning new things. My main takeaway from it is to embrace the feeling of difficulty, because that’s when real learning is taking place.

As someone who grew up during the fixed-mindset era, I believe my main learning-related challenge was that from a young age I’d always been told that I was “smart”. This became my identity in school, and it was something I believed to be an innate part of my personality. Because learning had always come easily to me, I devalued the effort that others had to put in. This mindset was detrimental because whenever I encountered something that was difficult to learn, it threatened my very identity. If I had to put in much effort, then, I reasoned, I must not be as smart as I thought. (Related to this is the slacker’s modus operandi of not trying hard, and therefore having a handy excuse if you fail: “well, I could have done it if I’d actually cared about it,” etc.)

So, not to psychoanalyze myself too much, but I now believe I have a much healthier view of effort and learning. By fostering a growth mindset, I’ve been able to approach learning about neural networks quite differently than I would have a few years ago. I find the class I’m taking pretty difficult, but my self-esteem isn’t suffering, because I understand that being challenged is part of learning.

I’m fine with understanding the high-level concepts, but my understanding of the particulars of the math is definitely lacking. I know my issue isn’t that I’m “not a math person”, but rather that I haven’t grasped the fundamentals of calculus well enough to fully understand machine learning. If I applied myself and invested the time, I could build my knowledge up to the needed level. My difficulties with this class aren’t a reflection of my intelligence – instead they indicate that I’m pushing myself to learn outside of my comfort zone.

I think that to truly understand neural networks I would need to start by brushing up on my math skills. However, my goal here was mainly to develop a higher-level understanding of what neural networks are and how they function (not necessarily to build them from scratch), and in that I think I’m succeeding.


Deep Learning #2

I’ve now completed Week 1 and half of Week 2 of the Coursera class Neural Networks and Deep Learning. This included the introductory videos and the videos on logistic regression as a neural network. The professor seems to assume that most students will have a relatively strong background in statistics, but not as strong a background in calculus, which works out well for me.
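To make that framing concrete for myself: logistic regression can be viewed as a single-“neuron” network, where a weighted sum of the inputs is passed through a sigmoid activation and the weights are learned by gradient descent on the cross-entropy loss. Here’s a minimal sketch of the idea in Python with NumPy. This is my own toy example with made-up data, not code from the course:

```python
import numpy as np

def sigmoid(z):
    # Squashes any real number into (0, 1), interpreted as a probability
    return 1.0 / (1.0 + np.exp(-z))

# Toy data: 4 examples with 2 features each, and binary labels
X = np.array([[0.5, 1.2], [1.0, 0.8], [-0.3, -0.5], [-1.1, 0.2]])
y = np.array([1, 1, 0, 0])

w = np.zeros(2)  # one weight per input feature
b = 0.0          # bias term
lr = 0.1         # learning rate

for _ in range(1000):
    a = sigmoid(X @ w + b)         # forward pass: predicted probabilities
    dz = a - y                     # gradient of cross-entropy loss w.r.t. the weighted sum
    w -= lr * (X.T @ dz) / len(y)  # backward pass: update weights...
    b -= lr * dz.mean()            # ...and bias

print(np.round(sigmoid(X @ w + b), 2))  # probabilities approaching [1, 1, 0, 0]
```

As I understand it, the “deep” networks later in the course stack many of these units into layers, but the same forward pass / backward pass structure carries through.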

I was kind of intimidated to start this class because I didn’t know how much math knowledge would be expected of the students. The professor has said that it’s okay if your calculus skills aren’t strong – that an intuitive understanding is more important.

So far I feel comfortable with my skill level and what we’re learning in the class. I have to really concentrate to understand it, but it’s not overly difficult. I’ve found that pausing the video and taking handwritten notes helps me to better internalize the information being presented.

Deep Learning #1

On 9/6 I started my first project on deep learning by beginning Andrew Ng’s Coursera class on Machine Learning.

I like the Coursera format of video lessons with questions that pop up to check for understanding. It’s also really nice to have the option of watching the videos at 1.25x or 1.5x speed.

I think the main drawback is that it’s hard to take an online course as seriously as you would an in-person course. Interaction with the instructor is usually non-existent, and interaction with other students, while encouraged, usually isn’t necessary to complete the assignments.

Right now I’m not sure how strong my background knowledge of machine learning needs to be in order to start the neural networks course next week. The amount of information can be overwhelming. I hope that my previous knowledge of statistics and the videos I watched on machine learning this week will be enough to build on.