
Apple Working on ‘Ferret UI’ AI Model That Can Understand iPhone UI

Apple researchers have published yet another paper on artificial intelligence (AI) models, and this time the focus is on understanding and navigating smartphone user interfaces (UI). The research paper, which has not yet been peer reviewed, highlights a large language model (LLM) dubbed Ferret UI that can go beyond traditional computer vision and understand complex smartphone screens. Notably, this is not the first paper on AI published by the research division of the tech giant. It has already published a paper on multimodal LLMs (MLLMs) and another on on-device AI models.

The pre-print version of the research paper has been published on arXiv, an open-access online repository of scholarly papers. The paper is titled “Ferret-UI: Grounded Mobile UI Understanding with Multimodal LLMs” and focuses on expanding the use cases of MLLMs. It highlights that most language models with multimodal capabilities cannot understand anything beyond natural images and are functionally “restricted”. It also states the need for AI models that can understand complex and dynamic interfaces such as those on a smartphone.

As per the paper, Ferret UI is “designed to execute precise referring and grounding tasks specific to UI screens, while adeptly interpreting and acting upon open-ended language instructions.” In simple terms, the vision language model can not only process a smartphone screen containing multiple elements that represent different information, but also describe those elements to a user when prompted with a query.
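To make “referring” and “grounding” concrete, here is a minimal, illustrative Python sketch (not Apple's actual code or API): grounding maps an open-ended instruction to the matching element's on-screen region, while referring describes the element found at a given region. The element labels, coordinates, and matching logic are made up for illustration; Ferret UI would infer this kind of structure directly from the screenshot.

```python
from dataclasses import dataclass

@dataclass
class UIElement:
    label: str           # e.g. "Reminders icon"
    kind: str            # e.g. "icon", "button", "text field"
    box: tuple           # (x1, y1, x2, y2) in screen coordinates

# Hand-written screen description for illustration only; a model like
# Ferret UI would derive something similar from the raw screenshot pixels.
SCREEN = [
    UIElement("Reminders icon", "icon", (40, 120, 100, 180)),
    UIElement("Calendar icon", "icon", (120, 120, 180, 180)),
    UIElement("Search bar", "text field", (20, 40, 300, 80)),
]

def ground(query: str, screen: list) -> UIElement | None:
    """Grounding: map an open-ended language query to the matching element."""
    words = {w.strip("?.,").lower() for w in query.split() if len(w) > 3}
    for el in screen:
        if words & set(el.label.lower().split()):
            return el
    return None

def refer(box: tuple, screen: list) -> str:
    """Referring: describe the element located at a given screen region."""
    for el in screen:
        if el.box == box:
            return f"This is the {el.label}, a {el.kind}."
    return "No known element at that location."

if __name__ == "__main__":
    hit = ground("How do I open the Reminders app?", SCREEN)
    print(hit.label, hit.box)                 # Reminders icon (40, 120, 100, 180)
    print(refer((20, 40, 300, 80), SCREEN))   # This is the Search bar, a text field.
```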


How Ferret UI processes information on a screen
Photo Credit: Apple

Based on an image shared in the paper, the model can understand and classify widgets and recognise icons. It can also answer questions such as “Where is the launch icon?” and “How do I open the Reminders app?”. This shows that the AI is capable not only of explaining the screen it sees, but also of navigating to different parts of an iPhone based on a prompt.

To train Ferret UI, the Apple researchers themselves created training data of varying complexity. This helped the model learn basic tasks and understand single-step processes. “For advanced tasks, we use GPT-4 [40] to generate data, including detailed description, conversation perception, conversation interaction, and function inference. These advanced tasks prepare the model to engage in more nuanced discussions about visual components, formulate action plans with specific goals in mind, and interpret the general purpose of a screen,” the paper explained.
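For illustration only, here is a rough sketch of how the two tiers of training data described above might be organised. The advanced-task names come from the quoted passage; the record format, file names, and elementary examples are assumptions, not details taken from the paper.

```python
from dataclasses import dataclass

@dataclass
class TrainingSample:
    screenshot: str   # path to the UI screenshot
    task: str         # task family this sample teaches
    prompt: str       # instruction shown to the model
    target: str       # expected answer (text, possibly with coordinates)

# Elementary tier: simple, single-step questions about screen elements.
elementary = [
    TrainingSample("home_screen.png", "icon recognition",
                   "What is the icon at (40, 120, 100, 180)?", "Reminders icon"),
    TrainingSample("settings.png", "widget classification",
                   "What kind of widget is at (20, 40, 300, 80)?", "search bar"),
]

# Advanced tier: per the paper, generated with GPT-4 and covering detailed
# description, conversation perception, conversation interaction, and
# function inference.
advanced = [
    TrainingSample("home_screen.png", "function inference",
                   "What is the overall purpose of this screen?",
                   "It is the iPhone home screen, used to launch apps."),
]

print(len(elementary) + len(advanced), "samples")
```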

The paper is promising, and if it passes peer review, Apple might be able to use this capability to add powerful tools to the iPhone that perform complex UI navigation tasks from simple text or voice prompts. Such a capability appears ideal for Siri.

