Ethical and transparent AI


Recently, our newsletter featured a link to a post by former OpenAI chief scientist Ilya Sutskever, together with Daniel Gross and Daniel Levy, announcing their new company focused on AI safety.

Their online statement emphasises that this new venture will prioritise safety, security, and progress insulated from short-term commercial pressures.

However, this narrative sounds familiar. OpenAI was originally founded as a non-profit organisation to compete with Google’s DeepMind, with a mission to develop "safe and beneficial" artificial general intelligence (AGI) transparently. The inclusion of "open" in OpenAI's name underscored this commitment.

Is non-profit the only solution?

Yet OpenAI didn’t remain a non-profit for long, and Elon Musk was the first to leave, citing contradictions with the founding agreement.

While Ilya Sutskever and his colleagues are likely driven by genuine passion and motivation to deliver safe AGI, they are not the first to voice concerns about the dangers and lack of transparency in rapid AI development. In the past, we’ve seen open letters from Anthropic on AI safety and calls from ex-OpenAI and DeepMind employees for greater transparency and protections for whistle-blowers. Mustafa Suleyman has also spoken about the risks of generative AI.

Why do I bring this up? First, I want to express my wholehearted support for initiatives to make AGI safe, transparent, and secure. The importance of these efforts is undeniable.

However, history shows that without commercialising their solutions, such initiatives might struggle to succeed.

As solution integrators, we also have a responsibility to ensure the best outcomes for our clients. We cannot rely solely on others to do this work for us, nor can we implement AI thoughtlessly without validating that our solutions won’t expose our clients to data loss or breaches.

As software engineers, we are the link between end clients and technology providers. Our responsibility to our clients is to present each possible solution along with its pros and cons impartially, always prioritising their interests and security.

Our approach

At co.brick, we continually ask ourselves how to prepare for unexpected situations or cases where the software we develop on top of commercial LLMs might cause harm or violate local laws.

For instance, when working on DocMarker, our proof-of-concept assistant for legal professionals, we interviewed lawyers and legal advisors to gauge their reactions to our solution. This process surfaced numerous questions we hadn’t previously considered. Once we understood the constraints that local data-protection law (in Poland) imposes, we refined our shortlist of LLMs and planned our model training accordingly.

Without close cooperation, exchanging ideas, and challenging our own assumptions, we would produce yet another piece of software lacking real value or benefit. These interviews and workshops not only educated us but also fostered a better understanding among our partners and clients. This is how I see real progress towards safety and transparency being made. You could call it grassroots, but it starts in the middle, within the software houses and agencies on the front lines of new technology implementation.

Conclusion

While it’s encouraging to see prominent figures advocating for safe and transparent AGI, we, as software engineers, must also champion these practices. By doing so, we can make our clients and the broader society aware of the limitations of current AI models and advocate for best practices at a practical level.


Lukasz Pietraszek
Delivery Manager

Having recently transitioned into the role of Delivery Manager, I bring a blend of technical expertise and a commitment to driving value and fostering cohesive teamwork in project delivery.
