AI And Automation In Practice 

As Usage Takes Off, Policies Are Beginning To Follow

There was plenty of excellent content and interactive discussion at Digital Nation Australia’s “Digital As Usual: AI, Automation and Analytics” breakfast event in Sydney earlier this week.

The 2+ hour event was centred on multiple panel sessions and featured a solid cross-section of industries, including technology, transportation, financial services (FSI), education and entertainment, among others.

Given how top-of-mind these topics are, GenAI in particular, it’s worth unpacking some of the key points discussed.

As you’ve certainly noticed, the noise around AI is deafening, but that’s because it really is everywhere. A recent TRA study, commissioned by Datacom, found that nearly 90% of ANZ organisations are currently using AI in some form, although the usage varies widely.

 


This vast difference in AI usage was reflected among panelists as well. Conga, for instance, has added AI-enabled functionality to its existing SaaS platform so lawyers can more efficiently scan, structure and consume large volumes of unstructured data and reports. Transurban leverages AI and computer vision to analyse number plates, which is key to the firm’s core billing processes.

As AI usage explodes, implementing clear policies has become crucial

This is a good time to reinforce a key point: the use of AI and automation does not (and will not) eliminate the need for human involvement. At Transurban, staff in the Philippines perform regular quality checks to oversee processes and ensure the accuracy of results. The firm also has very clear data retention policies in place to ensure privacy and mitigate risk. For Conga, AI-enabled redlining of documents will never be fully automated, since lawyers still need complete access and oversight.

Firms across ANZ increasingly understand the need for AI-related policies, but as results from TRA’s study show, it’s still early days.

 


It’s not at all surprising that the top action currently among ANZ-based firms is providing adequate training and education to staff; more specifically, overcoming concern, misunderstanding and potential misuse through a combination of policy and education.

GenAI in particular provides opportunities to ramp up expertise and streamline learning via more efficient searching and summarising of large amounts of data. But panelists highlighted several key requirements to minimise risks:

Implement sufficient oversight and review. This is particularly critical for more junior staff, or those with limited professional experience or process expertise. Multiple attendees highlighted a clear and growing risk of staff using tools like ChatGPT, Bard, Bing AI and the like, then claiming the output as their own, something academia is already confronting.

Encourage more senior staff to embrace the technology. Directly address employees’ very valid concerns over job security and obsolescence, specifically the fear of AI-powered automation eliminating their jobs. Minimise fear and resistance by enabling staff with knowledge and professional expertise to regularly vet AI-driven output and decisions and validate them against real-world situations and experience. There really is no alternative. As one speaker noted, staff need to clearly understand that while “AI won’t replace you, someone using AI will.”

Manage data sources and ensure data quality. As one panelist noted, “you cannot have great AI without great data”. Lendlease Podium, for example, provides data and insights to the property industry, consuming data from a large number of external sources, including sensors in smart buildings. Managing the use of synthesised and generated data to augment existing internal data is therefore critical. That means not only addressing reporting, compliance and privacy considerations, but also identifying the provenance of data, and in particular what data has been generated by AI. A simple sketch of what that kind of provenance tagging might look like follows below.
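To make the provenance point a little more concrete, here is a minimal sketch (not anything discussed on the panel) of how a data team might tag records with their origin so AI-generated data can be identified and reported on later. The field names and categories are purely illustrative assumptions.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from enum import Enum


class Provenance(Enum):
    # Illustrative categories; a real taxonomy would be defined by the organisation
    EXTERNAL_SENSOR = "external_sensor"
    INTERNAL_SYSTEM = "internal_system"
    AI_GENERATED = "ai_generated"


@dataclass
class Record:
    payload: dict
    provenance: Provenance
    source: str
    ingested_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))


def ai_generated_share(records: list[Record]) -> float:
    """Report what fraction of a dataset was generated rather than observed."""
    if not records:
        return 0.0
    generated = sum(1 for r in records if r.provenance is Provenance.AI_GENERATED)
    return generated / len(records)


# Hypothetical records: one real sensor reading, one model-generated gap-fill
records = [
    Record({"occupancy": 0.72}, Provenance.EXTERNAL_SENSOR, "building-7/floor-3"),
    Record({"occupancy": 0.69}, Provenance.AI_GENERATED, "gap-fill-model-v2"),
]
print(f"AI-generated share: {ai_generated_share(records):.0%}")
```

Once every record carries a provenance field, reporting, compliance checks and model audits can all filter on it rather than guessing after the fact.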

A final thought...

This is clearly a massive topic, and the best we can hope for out of a 2+ hour discussion is to highlight some common concerns and opportunities. With that in mind, I wanted to leave you with one final thought that sums up much of the content and discussion: AI-enabled automation isn’t about efficiency; it’s about improved value delivery. Two use cases from the event highlighted that point.

Humanitix, a Sydney-based not-for-profit event ticketing platform, is leveraging AI-enabled automation to improve fraud detection via automated analysis of purchase patterns, which is key to maintaining ongoing customer trust. But the firm is also looking at automating processes to personalise experiences and make recommendations based on past purchases, linking customers to upcoming events they may be interested in.
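As a rough illustration of what purchase-pattern analysis can look like (an assumption-laden sketch, not Humanitix’s actual system), an unsupervised anomaly detector can flag orders that deviate from typical buying behaviour and queue them for human review. The features below are invented for the example.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical per-order features: tickets purchased, seconds from page load
# to checkout, and orders from the same card in the past hour.
orders = np.array([
    [2, 180, 1],
    [4, 240, 1],
    [1, 150, 1],
    [3, 200, 2],
    [40, 12, 9],   # bulk purchase, near-instant checkout, many recent orders
])

# Unsupervised anomaly detection; a prediction of -1 marks an order worth a
# closer (human) look rather than an automatic block.
model = IsolationForest(random_state=0).fit(orders)
flags = model.predict(orders)

for row, flag in zip(orders, flags):
    status = "review" if flag == -1 else "ok"
    print(f"{row.tolist()} -> {status}")
```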

Cranbrook School is automating key workflows, starting with an overhaul of enrolment and onboarding processes for new students. Through automation and the use of APIs for things like digital identity checks, they’ve managed to reduce a heavily manual two-hour process down to about 10 minutes, freeing staff to spend far more time doing what they do best: engaging directly with students.
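For a sense of how that kind of orchestration hangs together, here is a hedged sketch of an automated enrolment step calling a hypothetical digital identity-check API. The endpoint, fields and flow are assumptions for illustration, not Cranbrook’s implementation.

```python
import requests

IDENTITY_API = "https://identity.example.com/v1/checks"  # hypothetical endpoint


def verify_identity(applicant: dict, api_key: str) -> bool:
    """Call an external identity-check service instead of a manual document review."""
    resp = requests.post(
        IDENTITY_API,
        json={"name": applicant["name"], "date_of_birth": applicant["dob"]},
        headers={"Authorization": f"Bearer {api_key}"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json().get("status") == "verified"


def enrol(applicant: dict, api_key: str) -> str:
    # Each automated step here replaces a manual hand-off in the old process;
    # anything the API can't verify still falls back to a person.
    if not verify_identity(applicant, api_key):
        return "referred to staff for manual review"
    # ...create the student record, provision accounts, schedule orientation...
    return "enrolled"
```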

As always, I’d love to hear your thoughts, and continue this discussion. You can reach me at michael@techresearch.asia.