The rapid advancement and availability of AI technologies means companies have been keen to adopt and capitalize on them. However, Generative AI has been shown to produce ‘untrustworthy’ results that can be plain wrong or biased. Such outcomes can harm reputations, generate bad press, and break the trust of investors and consumers.
Governments around the world are keen to set up guidelines for responsible AI. The UK, for example, has created an Office for AI responsible for encouraging the safe and innovative use of AI. Elsewhere, a joint US-EU initiative is drafting a set of voluntary rules for businesses, the “AI Code of Conduct”, in line with their Joint Roadmap for Trustworthy AI and Risk Management.
In APAC, Singapore has been instrumental in spearheading responsible AI adoption, while Indonesia and Thailand have made strides in government AI readiness following their recent implementation of national AI strategies. Japan’s new AI guidelines are a big step towards responsible AI use, stressing the importance of avoiding biased data and promoting transparency. Nations such as New Zealand, Australia, South Korea, India, and others are drawing up governance and ethics guidelines, some voluntary, while some are taking steps towards legislation. In December 2023, the EU announced its AI Act, a deal on comprehensive rules for trustworthy AI. More recently, the Australian Federal Government announced that it will put mandatory guardrails in place for the use of AI in business and industry, which could include outright prohibition of ‘high-risk’ AI systems.
Across the international community, governments and businesses are striving towards trustworthy AI at both macro and micro levels. Developers, technology service providers and systems integrators in the APAC region should anticipate accelerated regulatory review of development practices, adoption, implementation and use of AI in business and industry.
For many of our partners, the initial foray into AI may start with Generative AI and Large Language Models. In our previous post, we explored the concepts of Fair AI and responsible use of AI. In this update, we dive into Trustworthy AI, what it comprises and Crayon’s blueprint for ethical and explainable Large Language Models (LLMs).
Let’s look at how trustworthy AI relates to, and differs from, Fair AI, Responsible AI, and Safe AI.
Fair AI: This is a subset of trustworthy AI, specifically addressing the need for AI to be unbiased and to treat all users equitably. While fairness is a critical aspect, trustworthy AI also requires the system to be reliable, lawful, ethical, and robust.
Responsible AI: Responsible AI is a broader category that includes fairness but also extends to ethical obligations and societal impacts, ensuring that AI acts in people’s best interests. Trustworthy AI encompasses all these aspects of responsible AI but also implies a level of trust from users that the AI system will behave as expected across a wide range of scenarios and over time.
Safe AI: Safety is another essential facet of trustworthy AI, emphasizing the need for AI systems to operate without causing unintended harm. Trustworthy AI takes this further, ensuring not only that the AI does not cause harm but also that it is dependable, maintains data privacy, and is secure against attacks or failures.
While fair, responsible, and safe AI are components that contribute to the trustworthiness of an AI system, trustworthy AI is the overarching goal, ensuring that AI systems are designed and deployed in a manner that earns the confidence of users and the general public and is worthy of that trust.
Trustworthy AI can be considered an umbrella term implying that an AI system is fair, responsible, safe, reliable, lawful, ethical, robust, and secure.
We want to ensure that AI systems are worthy of our trust, and here at Crayon we underscore the importance of trustworthy AI by integrating ethical considerations into the development and deployment of LLM-based solutions in all AI projects we deliver.
The concerns surrounding large language models (LLMs) as “black boxes” with opaque decision-making processes are well-founded, considering their intricate architectures. Nonetheless, advancements in the field are leading to greater explainability of, and control over, these models.
Crayon’s approach is a case in point. We conduct research in AI explainability, adopting emerging methods and tools to deepen insight into LLM behaviour. For instance, techniques analogous to those used in medical diagnosis tools to identify the influential parts of an image could be adapted to pinpoint the text segments that drive particular LLM responses.
The ability of LLMs like ChatGPT to explain themselves can be used to understand the reasoning behind their responses. In sentiment analysis, for instance, an LLM can identify and enumerate the sentiment-charged words shaping its conclusion, adding a transparent layer and enabling comparison with traditional methods such as LIME saliency maps.
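As a rough illustration of this comparison, the sketch below asks a (mocked) LLM for a sentiment label plus the words it considered influential, then contrasts that self-explanation with a LIME explanation of a small scikit-learn baseline classifier. The mock_llm_self_explanation function and the toy training data are placeholders rather than part of Crayon’s tooling; a real implementation would call your LLM client of choice.

```python
# Sketch: comparing an LLM's self-explanation for a sentiment call with a
# LIME explanation of a conventional classifier.
# Assumes `pip install lime scikit-learn`; the LLM call is mocked.
from lime.lime_text import LimeTextExplainer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

REVIEW = "The battery life is dreadful, but the screen is absolutely gorgeous."

def mock_llm_self_explanation(text: str) -> dict:
    """Hypothetical stand-in for prompting an LLM with something like:
    'Classify the sentiment of this review and list the words that most
    influenced your decision.' A real implementation would call an LLM API."""
    return {"sentiment": "mixed", "influential_words": ["dreadful", "gorgeous"]}

# A tiny conventional baseline for LIME to explain (toy training data).
train_texts = [
    "great phone, gorgeous screen", "absolutely love it", "fantastic value",
    "dreadful battery", "terrible and slow", "awful experience",
]
train_labels = [1, 1, 1, 0, 0, 0]  # 1 = positive, 0 = negative
baseline = make_pipeline(TfidfVectorizer(), LogisticRegression())
baseline.fit(train_texts, train_labels)

# LIME saliency for the baseline classifier on the same review.
explainer = LimeTextExplainer(class_names=["negative", "positive"])
lime_exp = explainer.explain_instance(
    REVIEW, baseline.predict_proba, num_features=5, num_samples=500
)
lime_words = {word.lower() for word, _ in lime_exp.as_list()}

# The LLM's own account of which words drove its answer.
llm_words = {w.lower() for w in mock_llm_self_explanation(REVIEW)["influential_words"]}

print("LLM self-explanation:", sorted(llm_words))
print("LIME top features:   ", sorted(lime_words))
print("Overlap:             ", sorted(llm_words & lime_words))
```

A large overlap between the two word sets is one informal signal that the self-explanation is pointing at genuinely influential tokens rather than a plausible-sounding rationalization.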
Rigorous assessment of these self-explanations against established methods (e.g. SHAP, LIME) validates their efficacy, and metrics are employed to gauge their faithfulness and intelligibility. Acknowledging the dynamic nature of AI systems, including LLMs, Crayon commits to ongoing monitoring and enhancement of explainability features, revisiting current interpretability practices as LLMs advance.
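One common family of faithfulness checks, sketched below, deletes the words an explanation flags as influential and measures how much the model’s confidence drops; a larger drop suggests a more faithful explanation. This is a generic illustration of the idea rather than the specific metrics Crayon applies, and the toy classifier stands in for whatever model is being explained.

```python
# Sketch of a deletion-based faithfulness check: remove the words an explanation
# flags as influential and measure the drop in the model's predicted probability.
# Generic illustration only; the toy classifier stands in for the real model.
import re

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_texts = [
    "great phone, gorgeous screen", "absolutely love it", "fantastic value",
    "dreadful battery", "terrible and slow", "awful experience",
]
train_labels = [1, 1, 1, 0, 0, 0]  # 1 = positive, 0 = negative
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(train_texts, train_labels)

def probability_drop(text: str, influential_words: list[str], label: int) -> float:
    """How much the predicted probability for `label` falls after deleting
    the words the explanation claims are influential."""
    original = model.predict_proba([text])[0][label]
    pattern = r"\b(" + "|".join(map(re.escape, influential_words)) + r")\b"
    ablated = model.predict_proba([re.sub(pattern, "", text, flags=re.IGNORECASE)])[0][label]
    return original - ablated

review = "absolutely gorgeous screen and fantastic value"
explained = ["gorgeous", "fantastic"]   # words an explanation claims matter
control = ["screen", "and"]             # a weaker explanation, for comparison

print("drop after removing explained words:", probability_drop(review, explained, label=1))
print("drop after removing control words:  ", probability_drop(review, control, label=1))
```

Faithfulness can be scored automatically in this way, whereas intelligibility is typically assessed through human review.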
We embrace user-centric design and believe it is imperative to ensure that explanations are accessible and comprehensible, catering to various expertise levels, from laypeople to AI specialists.
Further, Crayon integrates ethical considerations into the development and deployment of LLM-based solutions, ensuring transparency about AI explanations’ limitations and actively addressing biases and potential harms.
We have adopted an “Explainable by Design” strategy, employing specific paradigms according to the problem context.
Crayon’s strategic use of these paradigms maximizes LLMs’ inherent controllability and explainability, ensuring effectiveness, transparency, and trustworthiness. This proactive stance builds explainability into the core of LLM deployment. Implementing these strategies not only enhances the explainability and control of LLMs but also yields considerable business benefits.
Integrating explainability and control in LLMs aligns with ethical AI practices and offers substantial business advantages, including risk mitigation, enhanced user trust, appropriate solution deployment, competitive edge, and regulatory compliance, all of which are crucial for the sustainable and responsible growth of AI-driven enterprises.
Incorporating explainability and control in large language models (LLMs) aligns with the global shift towards trustworthy AI, emphasizing transparency and fairness. This approach not only builds trust among stakeholders in sectors where clear, justifiable decision-making is crucial, like healthcare and finance, but also addresses the increasing social awareness of technology’s impact.
Explainable AI plays a key role in identifying and correcting biases, ensuring fair and equitable operations as a significant step towards trustworthy AI. This commitment reflects a deep understanding of AI’s societal context and the concerns it raises, aims to comply with what organizations and governments are legislating for, and demonstrates a dedication to responsible innovation and sustainable business practices, all essential for maintaining operational legitimacy and social license.
The appetite for AI is high and partners are on the frontline of educating customers about the right ways to think about the value of AI in their businesses.
Having a trusted AI sherpa to guide your own learning journey is essential and that’s where Crayon offers a unique difference to our partners.
Far beyond the cloud solutions distribution and licensing expertise we offer, Crayon partners benefit from nearly a decade’s worth of frontline, hands-on AI solution development.
Crayon is an Azure OpenAI Global Strategic Partner with two Data and AI Centers of Excellence (COEs) including one based in Singapore. We are the only services company with cloud distribution capabilities across the APAC channel to have over 300 applied AI projects in market, which includes more than 2,500 models running on a proprietary accelerated MLOps framework.
Our work in this field is governed by the Crayon Responsible AI Guidelines (CRAIG). CRAIG grounds our services in sustainability, ethics, trust, robust engineering, and security, and establishes the in-depth policy mechanisms of a genuinely responsible AI organization. The policy is tightly integrated into our sales and delivery processes, our governance and support structures, and our knowledge dissemination.
This is material for any partner that is using, integrating or implementing AI in their own business or for their customers, and wants absolute confidence that their distributor of choice understands the nuanced perspectives involved.
If you are exploring the future of AI as a practice in your business and want to know more about the Crayon Responsible AI Guidelines (CRAIG), reach out to our Technology Advisory Group.
This post was originally published on Crayon.