The development of AI is creating new opportunities to improve the lives of people around the world, from business to healthcare to education. It is also raising new questions about the best way to build fairness, interpretability, privacy, and security into these systems.
These questions are far from solved, and in fact are active areas of research and development. Google is committed to making progress in the responsible development of AI and to sharing knowledge, research, tools, datasets, and other resources with the larger community. Below we share some of our current work and recommended practices. As with all of our research, we will take our latest findings into account, work to incorporate them as appropriate, and adapt as we learn more over time.
General recommended practices for AI
Reliable, effective user-centered AI systems should be designed following general best practices for software systems, together with practices that address considerations unique to machine learning. Our top recommendations are outlined below, with additional resources for further reading.
Recommended practices
Use a human-centered design approach
Identify multiple metrics to assess training and monitoring
When possible, directly examine your raw data
Understand the limitations of your dataset and model
Continue to monitor and update the system after deployment
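To make the "multiple metrics" recommendation concrete, the sketch below shows why a single aggregate number can be misleading: a model's overall accuracy can look acceptable while a per-group breakdown reveals much worse performance for one subgroup. The data, group labels, and helper functions here are illustrative placeholders, not part of any particular Google tool or dataset.

```python
# A minimal sketch of evaluating a binary classifier with several metrics,
# including a per-group breakdown. The labels, predictions, and "group"
# attribute are made-up illustrative data.

def confusion_counts(y_true, y_pred):
    """Return (tp, fp, tn, fn) for binary 0/1 labels."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return tp, fp, tn, fn

def metrics(y_true, y_pred):
    """Accuracy, precision, recall, and false-positive rate."""
    tp, fp, tn, fn = confusion_counts(y_true, y_pred)
    total = tp + fp + tn + fn
    return {
        "accuracy": (tp + tn) / total if total else 0.0,
        "precision": tp / (tp + fp) if (tp + fp) else 0.0,
        "recall": tp / (tp + fn) if (tp + fn) else 0.0,
        "false_positive_rate": fp / (fp + tn) if (fp + tn) else 0.0,
    }

def metrics_by_group(y_true, y_pred, groups):
    """Compute the same metrics separately for each subgroup."""
    out = {}
    for g in sorted(set(groups)):
        idx = [i for i, gi in enumerate(groups) if gi == g]
        out[g] = metrics([y_true[i] for i in idx],
                         [y_pred[i] for i in idx])
    return out

# Illustrative data: a decent overall accuracy hides a per-group gap.
y_true = [1, 0, 1, 0, 1, 0, 1, 0]
y_pred = [1, 0, 1, 0, 0, 0, 1, 1]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]

overall = metrics(y_true, y_pred)          # accuracy 0.75 overall
per_group = metrics_by_group(y_true, y_pred, groups)
# group "a": accuracy 1.0; group "b": accuracy 0.5
```

The same per-group comparison applies after deployment: recomputing these metrics on fresh production data, sliced the same way, is one simple way to detect drift or a widening gap between subgroups over time.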