My daughter is particular about manners. If you don’t say please or thank you when you’re supposed to, it’s a serious problem. This is starting to extend to the Alexa device she uses to listen to audiobooks. She says please and thank you after the device follows her instructions. Numerous times she has corrected me for not using similar pleasantries.
The world my daughter and children her age are growing up in is full of artificial intelligence. Talking to a smart device, recommended searches, GPS directions, and other AI-powered capabilities are normal parts of life for them.
So, how do we ensure our children have an awareness of artificial intelligence rather than thinking Alexa just automagically talks to them?
I recently ran across the AI for K-12 Initiative, an effort to map out a curriculum for primary and secondary school-age children to learn about artificial intelligence. The site has a ton of resources that I think will be helpful for parents and educators to take a look through.
Imagining the world my daughter will be living in 30 years from now is pretty overwhelming at times. Resources like this help me do what I can now to help ensure she’s at the table shaping that future. I’d love for your kids to be at that table as well.
Over 50 years ago, Gordon Moore, one of the founders of Intel, observed that the number of transistors chip makers were able to fit on a chip doubled every two years. Moore’s Law held true until the past couple of years, when the amount of power these chips need to compute became unsustainable and their computing gains started to level off. So software researchers and hardware researchers have been thinking of new, more energy-efficient ways of increasing computing power. As someone who loves a good connection, I was thrilled to come across an opportunity for these two efforts to meet.
Deloitte’s Applied AI Leader, Melissa Smith, posted the below tweet that got me thinking about this.
When I was a kid, my mom would let me look at bacteria using the electron microscope in her laboratory. Shoutout to North Carolina A&T State University for investing in its laboratory capabilities. It was incredible to see extremely small organisms with such clarity. But I could only observe these organisms in isolation. I couldn’t see how they interacted with other organisms, let alone predict how they would behave depending on changes in the environment. Things have changed in the 20-plus years since. Electron microscopes now allow researchers to examine nanoparticles with incredible accuracy.
There’s a whole field of science called quantum physics that describes the fundamental elements of the universe: photons, electrons, and so on. The paper Melissa linked to is part of an effort to equip researchers with the tools to simulate and study the interactions of many particles and make predictions about phenomena like gravity.
The researchers figured out a way to use neural networks, algorithms designed to mimic how our brains identify patterns, to simulate the interactions of large collections of particles, known as quantum systems. They did this because of the limits we are hitting in computing power.
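To make that idea a little more concrete, here is a toy sketch in Python. This is not the paper’s actual method, just an illustration of the general approach some of this research takes: a small neural network (here, restricted-Boltzmann-machine style, with made-up sizes and random untrained weights) that maps a configuration of spins to a wavefunction amplitude, so the network itself stands in for the quantum state.

```python
import numpy as np

# Toy sketch: a tiny neural-network "ansatz" that maps a spin
# configuration (+1/-1 per site) to an unnormalized wavefunction
# amplitude. All sizes and parameters here are illustrative.

rng = np.random.default_rng(0)

n_spins = 4    # visible units: one per spin
n_hidden = 3   # hidden units capture correlations between spins

# Randomly initialized parameters; in real work, training would
# tune these to minimize the system's energy.
a = rng.normal(scale=0.1, size=n_spins)              # visible biases
b = rng.normal(scale=0.1, size=n_hidden)             # hidden biases
W = rng.normal(scale=0.1, size=(n_hidden, n_spins))  # couplings

def amplitude(spins):
    """Unnormalized wavefunction amplitude for one spin configuration."""
    theta = b + W @ spins
    # exp(visible term) times product over hidden units of 2*cosh(theta):
    # the standard RBM form after summing out the hidden units.
    return np.exp(a @ spins) * np.prod(2.0 * np.cosh(theta))

config = np.array([1, -1, 1, 1])
print(amplitude(config))  # a positive number; relative values encode probabilities
```

The appeal is that the network has only a handful of parameters, while writing down the full quantum state directly would take an amount of memory that doubles with every spin you add. That trade is what lets these techniques stretch limited computing power further.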
Kunle Olukotun is a professor at Stanford who changed the game for computing in the 90s and early 00s. Back then, processing speeds were leveling off, similar to what is happening today. He introduced the multi-core processor, which kickstarted another wave of speed increases. Today, Kunle and his team at SambaNova Systems are building a new software language and hardware flexible enough for artificial intelligence applications to run at scale. The talk below gives a nice overview of what the SambaNova team is building.
SambaNova hasn’t released its products yet, so for the time being the neural network solution the researchers came up with works. It will be really cool to see the impact that software and hardware able to handle quantum systems simulations will have on their research. That convergence could be really powerful. I, for one, would love to see how much has changed from when I would ask my mom if I could look at something in the electron microscope.
AWS Announces General Availability of Amazon Textract (Amazon)
Amazon is making widely available a text and data extraction tool that is going to make it really easy to search all kinds of information. My whole time reading this release, all I could think about was how hard it is to search your own posts on Twitter. There’s no excuse now, Jack Dorsey.
Conference on Social Protection by Artificial Intelligence: Decoding Human Rights in a Digital Age (Freedom to Tinker)
The incorporation of artificial intelligence in welfare programs comes with a range of risks. The last section of this piece is critical: it takes a human-centered design approach with a focus on those most at the mercy of the AI technology operating portions of welfare programs. That kind of approach would be helpful in avoiding unintended consequences over the course of developing AI tools for these populations.
Niti Aayog plans index to rank states on artificial intelligence adoption (Economic Times)
An index rating the readiness of states across India to adopt artificial intelligence technologies will be very interesting to read. Considering the U.S. has states still struggling with voting machine technology, this sort of index would be very eye-opening for policy makers in this country.
There’s been a lot of conversation around the importance of making sure there’s diversity among the developers and researchers who are building artificial intelligence. One of the primary concerns is that this diversity imbalance will lead to bias being built into algorithms. The proposed remedy is for companies to hire more minorities onto these teams.
I’m a strong proponent of diversifying the makeup of those building artificial intelligence. But what does one do if the minority developers a team hires continue propagating bias in their code, bias that hurts them?
I recently heard a story about a group of minority developers building AI algorithms that wouldn’t have recognized their own skin color. I don’t think this is something that has gotten much, if any, attention, but it’s worth examining. Just because someone is a minority doesn’t mean they haven’t digested tropes about white people and people of color. Are these folks in the right mental space to think independently and proudly? Do they see color or pretend not to?
I’m going to be pondering this more, but wanted to put the thought out there. What frameworks do you have in your back pocket for thinking through issues like this?
Heinrich, Portman, Schatz Propose National Strategy For Artificial Intelligence; Call For $2.2 Billion Investment In Education, Research & Development (Martin Heinrich)
I wrote here about Congressional efforts to figure out the U.S. artificial intelligence direction. That effort plus this bill by a group of senators helps move us closer to the U.S. having an AI strategy. That said, $2.2B is not nearly enough investment. Just consider salaries for AI researchers: Google’s DeepMind lab paid $138M in salaries to 400 employees back in 2016, roughly $345,000 per employee at a single lab.
IDC: Asia-Pacific spending on AI systems will reach $5.5 billion this year, up 80% from 2018 (TechCrunch)
This shows what the U.S. is up against when it comes to investing in AI research and development.
Artificial intelligence becomes life-long learner with new framework (Science Daily)
As the son of a Wolfpack alumna, I had to include this. Further, as one with a long memory, I don’t know how I feel about AI software getting better at remembering previous tasks.