GSAP (GreenSock Animation Platform)
A wildly robust JavaScript animation library built for professionals
GSAP is a high-performance JavaScript animation library designed for creating precise and expressive motion on the web. It goes beyond simple visual effects, offering fine-grained control over timing, easing, and sequencing—making it ideal for interaction design where motion needs to respond naturally to user input and system states.
What sets GSAP apart is its reliability and flexibility in complex scenarios such as scroll-based interactions, chained animations, and dynamic UI feedback. With tools like timelines, designers and developers can structure motion as a clear, intentional flow, ensuring animations feel consistent, purposeful, and deeply integrated into the overall user experience.
How to use
Load the script with a script tag (a CDN link is available)
<script src="https://cdnjs.cloudflare.com/ajax/libs/gsap/3.12.2/gsap.min.js"></script>
Or, run 'npm install gsap' and import the module with import { gsap } from "gsap";
To animate type, SVG, or anything else you want, follow the basic syntax below.
gsap.from("#titleName", {
  delay: 0.5,
  y: 20,
  opacity: 0,
  ease: "expo.inOut"
});
// gsap.from animates an element FROM the specified starting values (here: shifted down
// 20px and fully transparent) to its current state, producing a fade-and-slide-in.
gsap.to("#titleName", {
  delay: 0.5,
  y: 20,
  opacity: 1,
  ease: "expo.inOut"
});
// gsap.to animates an element from its current state TO the specified end values.
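For sequenced motion, GSAP provides timelines, which chain tweens into a single controllable flow instead of juggling individual delays. A minimal sketch of the idea, assuming a page with hypothetical #titleName and #subtitle elements (this runs in a browser with the GSAP script loaded):

```javascript
// Create a timeline; "defaults" apply to every tween added to it.
const tl = gsap.timeline({ defaults: { duration: 0.6, ease: "expo.inOut" } });

// Tweens added to a timeline play one after another by default.
tl.from("#titleName", { y: 20, opacity: 0 })
  .from("#subtitle", { y: 20, opacity: 0 }, "-=0.3"); // position parameter: start 0.3s before the previous tween ends
```

The position parameter (the "-=0.3" above) is what makes timelines useful for interaction design: overlaps and gaps are expressed relative to the sequence, so retiming one tween doesn't require recalculating every delay by hand.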
The expressive motion scheme overshoots the final values to add bounce
Expressive is Material’s opinionated motion scheme, and should be used for most situations, particularly hero moments and key interactions.
The standard motion scheme eases into the final values
Standard is Material’s more subtle scheme, suited to functional, utility-focused transitions and small, frequently repeated interactions.
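The difference between the two schemes shows up directly in the easing math. A minimal sketch of an expressive-style "overshoot" ease versus a standard decelerating ease, assuming the conventional overshoot constant 1.70158 that GSAP's back ease uses by default (powerOut here is comparable to a "power2.out" ease):

```javascript
// Expressive-style ease: overshoots past 1 before settling (like GSAP's "back.out").
// s controls overshoot strength; 1.70158 is the conventional default (~10% overshoot).
function backOut(t, s = 1.70158) {
  const p = t - 1;
  return 1 + (s + 1) * p * p * p + s * p * p;
}

// Standard-style ease: decelerates smoothly into the final value, never exceeding it.
function powerOut(t) {
  return 1 - (1 - t) * (1 - t);
}

console.log(backOut(0.8) > 1);   // true: the expressive ease overshoots the target
console.log(powerOut(0.8) <= 1); // true: the standard ease approaches it monotonically
```

Both functions map progress t in [0, 1] to an eased value; the overshoot is why expressive motion reads as bouncy while standard motion reads as settled.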
Agentic UX and human-agent ecosystems
With 88% of business leaders planning to increase AI budgets for agentic capabilities, AI agents are becoming a strategic priority. This coincides with growing user acceptance and knowledge of where, how, and when these agents are actually useful.
The result is consolidation. Instead of managing dozens of granular agents, master agents will orchestrate specialized agents automatically based on task type, context, and importance. Now, designers must create experiences for these human-agent ecosystems, managing agent lifecycles, orchestrating handoffs, and determining when humans step in.
Dynamic interfaces generated on demand
Large language models like Gemini 3 Pro can now build interactive, tailored interfaces in real-time for each prompt. Research shows these interfaces matched human expert-designed work 44% of the time, which is remarkable given they are generated in seconds.
Instead of handing off fixed screens, designers will need to create constraints, safety rails, and evaluation criteria that guide how these model-driven interfaces behave.
Voice interfaces find their place
Voice has moved past the hype phase: in the US alone, 157.1 million people are expected to use voice assistants by the end of 2026. Design leaders can expect a rise in context-aware, multimodal experiences. These interfaces fluidly combine voice, touch, and visuals based on users' actions. Good UX design now considers when people’s hands are busy or their environment makes typing impractical.
Micro-interactions become the language
Remember when those little animations and hover effects were extras designers added at the end of a project, like icing on the cake? A button changing color, a progress bar filling up, a small celebration when you complete a task... we treated them like decorative touches.
These micro-interactions have become the way interfaces communicate with users, acknowledging actions without forcing people to stop and read confirmation messages. An interface without them feels lifeless, like talking to someone who never looks up while you're speaking.
AR moves from demo to daily use
I've sat through dozens of impressive AR demos at conferences over the years, seen some cool stuff, and then gone back to my office thinking "when would I actually use this in real work?"
That gap between impressive demo and daily tool is starting to close. Retail companies are letting you see furniture in your living room before you buy it, and design teams are walking through 3D models in the real spaces where those designs will live.
AR starts making sense when it solves a real problem better than looking at flat images on a screen, and more companies are finding those problems worth solving.
Personalization without crossing the line
Users want personalized experiences until they realize what the system needs to know to deliver them. Predicting what you need before you ask sounds helpful, but it requires constant behavior tracking to work.
Take Netflix's recommendation algorithm. It works well because users opted into a service built around personalization. Compare that to a retail website that tracks your browsing across dozens of unrelated sites to show you ads for products you looked at once. Same technology, completely different user perception of whether it's helpful or invasive.
The best enterprise software gives users control over how much the system adapts to them and makes privacy settings visible. 2026 may be the year companies figure out how to adapt to user preferences without invasive tracking.
Accessibility gets built in, not added on
Accessibility can't be an afterthought or a compliance checkbox anymore. In one of my latest Forbes articles, I discussed how accessible design can drive ROI and become a genuine competitive advantage.
When you design for cognitive differences from the start, everyone benefits. For instance, motion-sensitivity toggles help people with ADHD as well as anyone who simply gets dizzy from animations.
I expect accessibility to become one of the defining topics in UX design over the coming years, given its potential to improve how we build products.
Cross-platform UX that works
Users constantly switch between their phones, tablets, laptops, and smartwatches. They commute to work, spend hours at the office, take lunch breaks, head home, and expect their experience to follow them.
True cross-platform UX means your work follows you wherever you go. Start a task on mobile, continue on desktop, and finish on tablet without thinking about it. We'll see this becoming a much bigger part of business conversations in 2026.