Andreessen Horowitz posted this interesting conversation on cobalt – the mineral helping power our phones, electric vehicles, and more.
The conversation got me thinking about a piece I wrote back in 2015 (time flies!) on Jean-Yves Ollivier, Marc Andreessen, and the common interests they share in minerals that power the global economy.
There has been a lot written about how problematic cobalt mining is, particularly the extent to which child labor is involved in the Democratic Republic of the Congo, where much of the world’s cobalt is currently produced. Companies like Tesla and Apple are working on improving the sourcing of these minerals.
While smartphone adoption is growing rapidly, we’re still in the early stages for electric vehicles. According to Clean Technica, about two percent of vehicles sold last year were electric, which gives you a sense of how far there is to go.
So, if the world moves to electric vehicles, we could be consuming a lot more cobalt. In the piece I wrote, I link to a BBC piece on Baotou, a city in Inner Mongolia, China. The city is a hub for the production of some key minerals in smartphones and other complex devices. There’s a lake near the city that is extremely toxic as a result of industrial waste.
We hear a lot about artificial intelligence, and the technology is becoming more and more a part of our lives. Devices will come along with it: cars, sensors, devices connected to our brains, and more. Proponents of artificial intelligence say these technologies could create something of a utopia where we’re able to focus more on caring for others, the arts, and more.
My worry is that this supposed utopia would be layered on top of an underworld like Baotou. I had never heard of the place before reading that BBC piece.
Perhaps we’re really moving toward the singularity and an artificial intelligence-driven world, as Ray Kurzweil says we are. Maybe Elon Musk succeeds in driving the global adoption of electric vehicles. If so, we’ve got to think through to the outer edges of the supply chain to ensure we’re treating people and the environment well.
My daughter is particular about manners. If you don’t say please or thank you when you’re supposed to, it’s a serious problem. This is starting to extend to the Alexa device she uses to listen to audiobooks. She says please and thank you after the device follows her instructions. Numerous times she has corrected me for not using similar pleasantries.
The world my daughter and children her age are growing up in is full of artificial intelligence. Communicating with a smart device, recommended searches, GPS directions, and the other AI-driven capabilities we use are normal parts of life for them.
So, how do we ensure our children have an awareness of artificial intelligence rather than thinking Alexa just automagically talks to them?
I recently ran across the AI for K-12 Initiative, an effort to map out a curriculum for primary and secondary school-age children to learn about artificial intelligence. The site has a ton of resources that I think will be helpful for parents and educators to take a look through.
Imagining the world my daughter will be living in 30 years from now is pretty overwhelming at times. Resources like this help me do what I can now to help ensure she’s at the table shaping that future. I’d love for your kids to be at that table as well.
Over 50 years ago, Gordon Moore, one of the founders of Intel, observed that the number of transistors chip makers were able to fit on a chip doubled every two years. Moore’s Law held true until the past couple of years, as the amount of power these chips need to compute has become unsustainable and their computing power has started to level off. So, software researchers and hardware researchers have been thinking of new, more energy-efficient ways of increasing computing power. As someone who lives for a good connection, I was thrilled to come across an opportunity for these two efforts to meet.
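Moore’s observation is just compound doubling, and a quick back-of-the-envelope sketch shows how fast it adds up. The 1971 starting point below (the Intel 4004 at roughly 2,300 transistors) is a commonly cited figure; treat the projections as an illustration, not precise history.

```python
def transistors(start_count, start_year, year, doubling_period=2):
    """Project a transistor count assuming it doubles every `doubling_period` years."""
    doublings = (year - start_year) / doubling_period
    return start_count * 2 ** doublings

# Starting from ~2,300 transistors in 1971, doubling every two years:
base = 2_300
for year in (1971, 1991, 2011):
    print(year, round(transistors(base, 1971, year)))
```

Twenty years is ten doublings, so the count grows by a factor of about a thousand each decade-pair, which is why even a slight flattening of the curve is such a big deal.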
Deloitte’s Applied AI Leader, Melissa Smith, posted the tweet below, which got me thinking about this.
When I was a kid, my mom would let me look at bacteria using the electron microscope in her laboratory. Shoutout to North Carolina A&T State University for investing in its laboratory capabilities. It was incredible to see extremely small organisms with such clarity. But I could only observe these organisms in isolation. I couldn’t see how they interacted with other organisms, let alone predict how they would respond to changes in their environment. Things have changed in the 20-plus years since. Electron microscopes now allow researchers to examine nanoparticles with incredible accuracy.
There’s a whole field of science called quantum physics that describes the fundamental elements of the universe – photons, electrons, etc. The paper Melissa linked to is part of an effort to equip researchers with the tools to simulate and study the interaction of lots of particles and make predictions about phenomena like gravity.
The researchers figured out a way to use neural networks (basically, algorithms designed to mimic how our brains identify patterns) to simulate the interaction of many particles, or quantum systems. They did this because of the limits we are hitting in computing power.
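The networks in the paper are far more sophisticated, but the basic building block is simple: a neuron that adjusts its weights until its output matches a pattern. The toy below uses the classic perceptron rule to learn the AND function; it’s an illustration of the idea, not the researchers’ method.

```python
import random

def step(x):
    """Fire (output 1) only when the weighted sum is positive."""
    return 1 if x > 0 else 0

def train_perceptron(samples, lr=0.1, epochs=50):
    """Learn weights for a single neuron with the classic perceptron rule."""
    random.seed(0)  # fixed seed so the toy run is reproducible
    w = [random.uniform(-1, 1) for _ in range(2)]
    b = random.uniform(-1, 1)
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = step(w[0] * x1 + w[1] * x2 + b)
            err = target - out          # nudge weights toward the target
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# The AND pattern: output 1 only when both inputs are 1.
AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(AND)
for (x1, x2), target in AND:
    print((x1, x2), step(w[0] * x1 + w[1] * x2 + b))
```

Stack thousands of these units in layers and you get the kind of pattern-learning machinery the researchers point at quantum systems, where the “pattern” is the behavior of many interacting particles.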
Kunle Olukotun is a professor at Stanford who changed the game for computing in the ’90s and early ’00s. Back then, processing speeds were leveling off, similar to what is happening today. He introduced the multi-core processor, which kickstarted another wave of speed increases. Today, Kunle and his team at SambaNova Systems are building a new software language and hardware flexible enough for artificial intelligence applications to run at scale. The talk below gives a nice overview of what the SambaNova team is building.
SambaNova hasn’t released its products yet, so for the time being the neural network solution the researchers came up with will do the job. It will be really cool to see the impact that software and hardware able to handle quantum systems simulations have on their research. That convergence could be really powerful. I, for one, would love to see how much has changed from when I would ask my mom if I could look at something in the electron microscope.
AWS Announces General Availability of Amazon Textract (Amazon)
Amazon is making widely available a text and data extraction tool that is going to make it really easy to search all kinds of information. My whole time reading this release, all I could think about was how hard it is to search your own posts on Twitter. There are no excuses now, Jack Dorsey.
Conference on Social Protection by Artificial Intelligence: Decoding Human Rights in a Digital Age (Freedom to Tinker)
The incorporation of artificial intelligence into welfare programs comes with a range of risks. The last section of this piece is critical: taking a human-centered design approach with a focus on those most at the mercy of the artificial intelligence technology operating portions of welfare programs. That kind of approach would help avoid unintended consequences over the course of developing AI tools for these populations.
Niti Aayog plans index to rank states on artificial intelligence adoption (Economic Times)
An index rating the readiness of states across India to adopt artificial intelligence technologies will be very interesting to read. Considering the U.S. still has states struggling with voting machine technology, this sort of index would be very eye-opening for policymakers in this country as well.
There’s been a lot of conversation about the importance of ensuring diversity among the developers and researchers who are building artificial intelligence. One of the primary concerns is that a diversity imbalance will lead to bias being built into algorithms. The proposed remedy is to hire more minorities onto these teams.
I’m a strong proponent of diversifying the makeup of those building artificial intelligence. But what does one do if the minorities hired onto a team continue propagating bias in their code, bias that hurts them?
I recently heard a story about a group of minority developers building AI algorithms that wouldn’t have recognized their own skin color. I don’t think this has gotten much attention, but it’s worth examining. Just because someone is a minority doesn’t mean they haven’t internalized tropes about white people and people of color. Are these folks in the right mental space to think independently and proudly? Do they see color, or pretend not to?
I’m going to be pondering this more, but wanted to put the thought out there. What frameworks do you have in your back pocket for thinking through issues like this?