
News broke recently of a ‘comprehensive survey of Australian chief executives, chief technology officers and other leaders of companies using AI’, which reported that most had immature or “developing” approaches to using AI responsibly.
As a result, the Australian Government is accelerating its plans to introduce mandatory guardrails for the use of AI, which could include the outright prohibition of ‘high-risk’ AI systems.
This news is a timely reminder of why your choice of technology distributor matters. Far beyond the cloud solutions distribution and licensing expertise we offer, Crayon partners benefit from nearly a decade’s worth of frontline, hands-on AI solution development.
Crayon is an Azure OpenAI Global Strategic Partner with two Data and AI Centers of Excellence (CoEs), including one based in Singapore. We are the only services company with cloud distribution capabilities across the APAC channel to have over 300 applied AI projects in market, with more than 2,500 models running on a proprietary accelerated MLOps framework.
This experience matters to any partner that is using, integrating or implementing AI in their own business or for their customers.
In the wake of this recent news and the increased regulatory scrutiny that will follow, we encourage partners to read the following blog penned by our Chief Data Scientist, David Mosen. It is an insightful look at the issues of fair and responsible AI development and use, and why Crayon has embedded ethical principles into our AI practice.
Artificial Intelligence (AI) solutions can be invaluable tools for efficient decision-making, but they must also make good decisions. When they do not, the consequences can harm lives and damage businesses. Known examples include AI systems that leak information or amplify social discrimination. Such events increase public distrust of the companies involved and of AI as a field. AI systems for decision-making must be guided by two critical imperatives: fairness by design and responsible use.
When we talk about fairness in AI, we mean adhering to a set of principles and practices that guide both design and implementation. They aim to ensure AI technologies are trustworthy, ethically sound, and socially responsible. In essence, fairness in AI means being clear about how people who use AI technologies will benefit, and being sure of the protections for those subject to decisions made by AI systems.
It starts with data. It is critical that the data used to train AI algorithms is representative of the population as a whole. If it is not, AI decision-making can perpetuate bias or discrimination.
The design process must also include people who have experienced bias and discrimination. Their seat at the table is vital to the fair design and responsible use of AI technologies.
Personal information must be anonymized. AI design and development should leverage specific algorithms, evaluation metrics and preprocessing techniques designed to avoid bias. It’s also important to think about data labelling, since this can also introduce bias.
For example, let’s say you are training an AI model to recognize faces. The training data must include diverse ethnicities, genders, and ages. The AI team training the model should consult with a diverse group to ensure the dataset is genuinely representative. They should be well versed in using bias mitigation techniques, such as masking sensitive attributes that can lead to bias when an AI is assessing new data.
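As a rough illustration of the preprocessing ideas above, the Python sketch below reports how well each group is represented in a hypothetical training set, then masks the sensitive attributes before training. The dataset, column names and helper functions are invented for the example; they are not Crayon's tooling or framework.

```python
# Illustrative only -- hypothetical column names and data, not Crayon's framework.
import pandas as pd

def representation_report(df, sensitive_cols):
    """Share of each group per sensitive attribute, to spot under-representation."""
    return {col: df[col].value_counts(normalize=True).to_dict() for col in sensitive_cols}

def mask_sensitive(df, sensitive_cols):
    """Drop sensitive attributes so the model cannot condition on them directly."""
    return df.drop(columns=sensitive_cols)

# Tiny synthetic training table standing in for face-recognition metadata
data = pd.DataFrame({
    "ethnicity": ["A", "A", "B", "C", "A", "B"],
    "gender":    ["F", "M", "F", "M", "F", "M"],
    "age_band":  ["18-30", "31-50", "18-30", "51+", "31-50", "18-30"],
    "image_id":  [1, 2, 3, 4, 5, 6],
    "label":     [1, 0, 1, 1, 0, 1],
})

print(representation_report(data, ["ethnicity", "gender", "age_band"]))
train_features = mask_sensitive(data, ["ethnicity", "gender", "age_band"])
print(train_features.columns.tolist())  # only image_id and label remain
```

Note that dropping sensitive columns on its own is not a complete mitigation: correlated proxy variables can still carry the same bias, which is why dedicated algorithms, evaluation metrics and diverse review remain essential.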
Clearly, eliminating the potential for harm to under-represented people in the community is of critical benefit to society. It is also important that advantages are not unduly created for others based merely on ethnicity, gender, age, sexual orientation or other characteristics. This alone is more than sufficient reason to adhere to fair AI principles and practice.
For users, fair AI helps ensure they get the most accurate and unbiased results from AI systems. It also considers individual user needs and preferences. This allows for more natural and personalized interactions, and a more humanized experience.
For organizations, fair AI provides solid foundations for sustainable, ethical business models. It shows both awareness of the concerns and a commitment to making a positive difference. This builds brand trust, loyalty and repeat business. As a practice, fair AI is a prudent risk management measure that helps to reduce legal liability.
A responsible AI framework must consider not only fairness, but also technological soundness and trust.
A fair AI system avoids bias in decision-making. It must neither advantage nor disadvantage people based on protected characteristics. To be responsible, an AI system must also be technologically sound: secure, accurate, reliable, and explainable.
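To make the fairness test concrete, here is a minimal, hypothetical check of the kind a team might run on a system's decisions: comparing the rate of favourable outcomes across groups defined by a protected attribute, sometimes called demographic parity. The function names and data are illustrative only and do not represent Crayon's CRAIG guidelines.

```python
# Illustrative only -- a simple demographic-parity check on hypothetical decisions.
from collections import defaultdict

def selection_rates(decisions, groups):
    """Rate of favourable (1) decisions per group."""
    totals, favourable = defaultdict(int), defaultdict(int)
    for decision, group in zip(decisions, groups):
        totals[group] += 1
        favourable[group] += decision
    return {group: favourable[group] / totals[group] for group in totals}

def demographic_parity_difference(decisions, groups):
    """Gap between the best- and worst-treated groups; 0 means parity."""
    rates = selection_rates(decisions, groups)
    return max(rates.values()) - min(rates.values())

decisions = [1, 0, 1, 1, 0, 1, 0, 0]                  # 1 = favourable outcome
groups    = ["X", "X", "X", "Y", "Y", "Y", "Y", "X"]  # protected-attribute groups
print(selection_rates(decisions, groups))
print(demographic_parity_difference(decisions, groups))
```

A demographic parity difference near zero suggests the system is not favouring one group over another on this measure, though in practice teams would look at several fairness metrics alongside accuracy, security and explainability checks.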
Crayon operates a Center of Excellence for Data and Artificial Intelligence Services (Data and AI CoE). This is a global practice hub serving data-driven businesses worldwide. Our reference framework for the responsible development of AI in the company is CRAIG – Crayon Responsible AI Guidelines.
CRAIG grounds our services in sustainability, ethics, trust, robust engineering, and security. It establishes the in-depth policy mechanisms of a genuinely responsible AI organization, and that policy is tightly integrated into our sales and delivery processes, our governance and support structures, and how we disseminate knowledge.
Crayon aims to be at the forefront of the ongoing AI revolution. Our commitment to ethical, fair, and responsible AI is not only essential to that vision, it is an active demonstration of our core values.