CVPR 2026 Workshop — Denver, Colorado
Abstraction
Heritage
Narrative

The 3rd AI for Visual Arts workshop at CVPR 2026 explores how computer vision perceives, interprets, and models abstraction across artistic and cultural imagery—paintings, comics, sculptures, and installations that challenge the assumptions of conventional models.
These stylized domains expose weaknesses in robustness, generalization, and interpretability, providing a principled framework for evaluating perception beyond realism. Can models "see" abstraction like humans, recognizing the dog-ness in a Picasso or inferring motion from comic lines?
Advance understanding of robust segmentation, saliency, and depth estimation under stylization and abstraction. Evaluate how vision models interpret abstraction and generalize beyond photorealistic imagery.
Benchmark and discuss authenticity, watermark resilience, and provenance in AI-assisted artistic processes. Examine how vision systems can detect, trace, and validate transformations in creative content.
Bridge creative and analytical domains—connecting artists, computer vision scientists, and curators to define trustworthy, human-aligned AI tools for creative and heritage contexts.
Friday, 11:59 PM AoE
Announced
Monday — for accepted full papers only
Friday — final papers and forms
Monday, 11:59 PM AoE
Thursday, 11:59 PM AoE — no camera-ready required
Competition opens on CodaBench
Final submission deadline
AI4VA @ CVPR 2026, Denver
Novel, previously unpublished research. Accepted papers appear in the official CVPR 2026 Workshop Proceedings.
New, previously published, or concurrently published research, as well as work-in-progress. Will not appear in the proceedings.
All posters: portrait orientation, max 90 × 180 cm. All accepted papers are presented as posters. Selected oral spotlight presenters give a 7-minute talk plus Q&A, in addition to presenting a poster.
The AI4VA Image Composition Challenge (PortraitCraft) features two competitive tracks that push the boundaries of AI-driven portrait composition understanding and generation. Participants are invited to develop novel methods for analysing and composing portrait imagery in artistic contexts.

Image Composition
Portrait Understanding & Generation
PortraitCraft is a benchmark dataset for portrait composition understanding and portrait composition generation. It is designed to support models in learning and evaluating composition quality in portrait images. The dataset focuses on images with humans as the primary subjects and covers a wide range of scenarios, including single-person and multi-person portraits, as well as half-body and full-body compositions. It emphasizes key composition factors such as subject prominence, pose quality, image layout, and overall visual atmosphere.
PortraitCraft supports two task directions: portrait composition understanding and portrait composition generation. The dataset is constructed through large-model-assisted filtering combined with evaluation by professional designers, ensuring high-quality samples and fine-grained composition annotations.
Key dates for the PortraitCraft challenge. See the workshop timeline for full paper and workshop dates.
The challenge is organized into two complementary tracks. Participants may choose to focus on structured analysis of existing portraits, generation from composition-oriented specifications, or both.
Given a portrait image, predict the overall composition quality score, provide fine-grained attribute judgments, and answer a challenging visual question.
CodaBench: Track 1 competition →
Given structured composition descriptions, generate portrait images that accurately realize the specified layout and aesthetic intent.
CodaBench: Track 2 competition →
Participate: Track 1 on CodaBench
Portrait Composition Understanding aims to evaluate a model's ability to understand portrait composition in a structured and interpretable way. Given a portrait image, participants are required to produce three types of outputs: a predicted overall composition score, ternary judgments on dozens of predefined fine-grained composition attributes, and an answer to a carefully designed visual question that tests detailed understanding of the image content. Unlike traditional aesthetic assessment tasks that focus only on global quality prediction, this track emphasizes both attribute-level composition analysis and fine-grained visual comprehension. The goal is to encourage models that not only estimate how good a portrait is, but also explain why it is good and demonstrate that they truly understand the image.
In this track, the model takes a portrait image as input and is required to perform structured composition analysis. The task consists of a training stage and a testing stage.
(1) Training stage. The training data for Track 1 are provided in the form of image-text pairs. Each training sample consists of a portrait image and a corresponding text description. The text includes an overall composition score, the scores of 13 composition attributes, and explanations of these attribute-level judgments.
(2) Test stage. During testing, the model receives a single portrait image as input and is required to produce three types of outputs: a predicted overall composition score, ternary judgments on the predefined fine-grained composition attributes, and an answer to a visual question that tests detailed understanding of the image content.
Together, these three outputs assess global composition evaluation, attribute-level composition reasoning, and detailed visual understanding within a unified framework.
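To make the shape of these three outputs concrete, the sketch below bundles them into one JSON-serializable record per image. This is purely illustrative: the field names, the score range, and the ternary encoding are assumptions, and the official submission format is defined on the CodaBench competition page.

```python
import json

# Illustrative attribute names — stand-ins for 3 of the 13 real attributes.
ATTRIBUTES = ["subject_prominence", "pose_quality", "image_layout"]
TERNARY = {-1, 0, 1}  # assumed encoding, e.g. poor / neutral / good


def make_record(image_id, score, attribute_judgments, vqa_answer):
    """Bundle the three required Track 1 outputs into one record."""
    assert 0.0 <= score <= 10.0, "assumed score range"
    assert all(v in TERNARY for v in attribute_judgments.values())
    return {
        "image_id": image_id,
        "composition_score": score,        # global quality estimate
        "attributes": attribute_judgments, # one ternary judgment per attribute
        "vqa_answer": vqa_answer,          # answer to the visual question
    }


record = make_record(
    "portrait_0001",
    7.5,
    {a: 1 for a in ATTRIBUTES},
    "The subject is centered on the pier.",
)
print(json.dumps(record))
```

Grouping all three outputs per image in a single record keeps the global score, the attribute-level reasoning, and the visual answer aligned, which mirrors the unified evaluation described above.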
Submissions will be evaluated using a unified score that jointly considers performance across all three required outputs. The exact weighting and implementation details of the evaluation protocol are not disclosed to ensure fairness and robustness of the benchmark.
The pair below illustrates an input portrait alongside a visualization of annotation results, highlighting the kinds of fine-grained composition cues the challenge considers.


Participate: Track 2 on CodaBench
Portrait Composition Generation evaluates a model's ability to generate portrait images from structured composition-oriented descriptions. Participants are provided with training data consisting of portrait images paired with composition-focused annotations. At test time, only the structured descriptions are given, and models must generate corresponding portraits. This track emphasizes whether models can accurately interpret and realize composition requirements in generated portraits.
In this track, the model takes structured composition-oriented descriptions as input and is required to generate corresponding portrait images. The task consists of two stages:
(1) Training stage. Participants are provided with portrait images paired with structured textual descriptions that focus on composition and aesthetic attributes such as subject placement, spatial organization, visual center, negative space, and overall composition style.
(2) Test stage. During testing, only structured composition descriptions are provided. Participants are required to generate portrait images that reflect the specified composition requirements.
The final score is computed based on the consistency between the generated images and the target structured composition descriptions. The evaluation focuses on whether the generated results accurately reflect the key composition and aesthetic characteristics specified in the descriptions.
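As a conceptual proxy for this kind of consistency scoring (not the official protocol, whose details are not specified here), one could embed each generated image and its target description with a vision-language encoder and average their cosine similarities. The sketch below uses random vectors as stand-ins for real embeddings:

```python
import numpy as np


def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))


def consistency_score(image_embs, text_embs):
    """Mean image–text similarity over all (generated image, description) pairs."""
    return float(np.mean([cosine_similarity(i, t)
                          for i, t in zip(image_embs, text_embs)]))


rng = np.random.default_rng(0)
# Stand-in embeddings; in practice these would come from a
# vision-language encoder applied to images and descriptions.
imgs = [rng.normal(size=512) for _ in range(4)]
texts = [v + rng.normal(scale=0.1, size=512) for v in imgs]  # near-duplicates

print(round(consistency_score(imgs, texts), 3))  # close to 1.0 for well-matched pairs
```

A higher mean similarity indicates that the generated portraits better reflect the specified composition; the actual challenge metric may weight specific composition attributes rather than a single embedding distance.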
Figure 3 shows a reference portrait; Figure 4 shows an image regenerated from a structured composition description.


Full-body portrait photograph of a young woman dancing gracefully on a long wooden pier extending into shallow turquoise sea water at sunset.
Strong central perspective composition. The pier forms leading lines toward the horizon. The subject is positioned near the center axis of the frame. She stands on one foot with the other leg lifted and bent, arms extended outward in an expressive balanced pose.
A soft, faint rainbow appears diagonally across the sky from the upper-right corner to the lower-left region of the frame, forming a subtle diagonal compositional structure that enhances visual flow without dominating the scene.
…
Awarded to the highest-quality full paper submission demonstrating novelty, rigour, and impact at the intersection of AI and visual arts.
Awarded for an outstanding poster presentation that effectively communicates innovative research to the workshop audience.

Cornell University & Cornell Tech
Assistant Professor in Computer Science. Her research combines images, language, and 3D geometry to build multimodal perception systems that can handle the full complexity of the real 3D world.

University of Washington & Allen Institute for AI
Assistant Professor at the Allen School. Co-directs the RAIVN lab at UW and directs the PRIOR team at AI2. His research lies at the intersection of computer vision, NLP, robotics, and HCI.

Curator & Researcher, Creative AI
Curator, producer and researcher specialising in AI in the creative industries. Honorary Senior Research Fellow at UCL Centre for Artificial Intelligence.
TBC
The fourth keynote speaker will be announced soon. Please check back for updates.
Lead Organizer
Assistant Professor, University of Bath, UK (Publicity Co-Chair, CVPR)
We gratefully acknowledge the generous support of our sponsors who make this workshop and its awards possible.

Interested in supporting the intersection of AI and visual arts? We welcome partners who share our vision for trustworthy, human-aligned AI in creative domains.
Get in Touch