At MAX, Adobe's annual conference that brings people from the creative industries together under one roof, one widely discussed theme was how generative artificial intelligence could assist users in their creative process rather than replace jobs. The new AI tools that Adobe showcased during the three-day conference, which ended on Thursday in Los Angeles, made headlines for being transformative, but many also questioned the pitfalls associated with generative AI technology.

"The iPhone moment, in hindsight, was massively transformative to our society in both good and bad ways. We're still trying to figure out the right way to adopt mobile technology, internet technology, cloud technology, and every one of those waves of innovation and disruption. Gen AI will have as big an impact, but I think we will also continue to wrestle with all the questions about how we want to adopt this," said Ely Greenfield, CTO of Digital Media at Adobe, in an interview with indianexpress.com.

The buzz around generative artificial intelligence has been high as tech companies, including Adobe, continue to roll out AI features in their core products, such as Photoshop and Express. "Over the past six months, since we released Adobe Firefly, our stock business is up in terms of the number of contributors, the amount of content contributed, and content licenses, which means more money in the pockets of our contributors. It has not had a negative impact at all," Greenfield said while addressing the Asian press.

Earlier this year, Adobe launched a generative AI tool called Firefly, which allows users to edit images simply through typed commands. This year at MAX, many of the latest announcements from Adobe focused on artificial intelligence, particularly how Firefly will work with the company's Creative Cloud apps: Photoshop, Illustrator, InDesign, and Premiere Pro. In fact, Adobe's generative AI imaging tool has produced over three billion images since March.
Critics, however, are now calling out Big Tech over the rising costs of running these generative AI tools, while there is also a growing call for regulation, a hot topic in Washington in recent months. "The cost of running the models that we built is actually much less expensive than the large language models, partially because of a different architecture and partially because they are smaller," Greenfield said. Although he did not disclose the cost of developing and maintaining the software, Greenfield did mention that the company is actively investing in the technology to make it more efficient.

"It's not the cost," he continued. "Just last week, as we were finalizing the Firefly Image 2 model, one of our technologists found a way to run it at twice the speed at the same cost. So, speed and cost tend to be opposite sides of the same coin. We could have run it at the same speed at a lower cost. In this case, we chose to run it faster at the same cost and pass the benefit to users."

Generative AI models such as Adobe's Firefly are expensive to build and run. They require specialised chips, data servers, computing power, and some of the best engineers. "It's early days in this technology, which means we see lots of quick returns. The cost is coming down quickly," he said.

One way Adobe is looking to bring down the cost of running these imaging models is by taking a hybrid route. "We are looking at bringing these models into a hybrid world where they run partially on the device and partially in the cloud," Greenfield said. "A lot of our customers and creative professionals use powerful machines. We would love to take advantage of those and provide cheaper, more accessible ways to use this technology on their local device, and then maybe use the cloud for the real high-quality, high-definition renditions," he explained.

The writer was in Los Angeles at the invitation of Adobe.