Visualization
Join Tianna Uchacz, Ph.D., assistant professor of Visualization, and Sophie Pitman, Ph.D., of the University of Wisconsin-Madison, for a discussion of their work with a curious 18th-century French-Flemish manuscript in UW-Madison’s Special Collections. Translation of the manuscript reveals instructions for dyeing textiles and paper to create artificial flowers.
This public presentation on the reconstruction of historical textiles and fashion by the Glasscock Center’s Visiting Fellow Dr. Sophie Pitman (UW-Madison) includes an opportunity to try Renaissance fabric-finishing techniques. The textile researcher will explain what we can learn by incorporating hands-on experimentation into archival, literary and visual analysis.
This student-run event is the 33rd annual showcase of Visualization students’ work from the past year, including a gallery exhibition of physical works and a screening of time-based works.
“Floriography” is a multidisciplinary performance featuring live music, dance, projection, robotics and interactive installations. The program includes works for violin, marimba and electronics, alongside student- and faculty-created visual and spatial designs from the College of Performance, Visualization and Fine Arts and the College of Engineering.
As autonomous driving systems evolve, the shift from standard vision-language models to vision-language-action (VLA) architectures marks a critical milestone, integrating "action" as a core modality. Despite their potential, however, current VLA models are heavily bottlenecked by their reliance on massive dataset collection and expensive, dense reasoning annotations. This talk traces this multimodal evolution and presents NoRD, a novel, data-efficient VLA model that achieves competitive end-to-end driving performance without that reasoning overhead.
The conference continues tomorrow with speaker presentations in the Liberal Arts and Arts & Humanities Building.