Modernizing Mailjet’s Email Editor

How it became the most fully featured editor across the Sinch ecosystem

Overview

Mailjet’s drag-and-drop email editor sits at the center of the product experience and directly influences onboarding, activation, and retention. While Mailjet’s sending infrastructure remained powerful, research revealed that the editor itself had become a growing source of user friction—not due to missing features, but because its rules, scope, and system feedback were often implicit or hard to trust.

Signals across support tickets, 9,300+ NPS verbatims, usability testing, and competitive benchmarking consistently pointed to breakdowns in clarity and confidence. Users struggled to understand where controls lived, how styling was scoped, what composite blocks allowed, and whether actions like testing or saving had actually succeeded.

At the same time, email editor UI patterns have largely commoditized across the market, limiting differentiation through visual design alone. This created a strategic opportunity to compete on brand governance, editing confidence, collaboration, testing visibility, and contextual AI.

This work focused on modernizing the editor as a foundational system, aligning it with user mental models, making scope and overrides explicit, strengthening drag-and-drop affordances, and embedding trust surfaces directly into the workflow. Validation confirmed that reducing uncertainty—not adding power—was the primary lever for improving usability and confidence.

Timeline:

Research and validation ran from Oct–Dec 2025, followed by design partnership, feasibility alignment, and engineering handoff beginning in Jan 2026.

Role:

Lead UX Researcher · Design Partner

My Role

I led research from discovery through synthesis and partnered closely with product design to translate insights into validated design decisions. My responsibilities included:

  • Research strategy and planning

  • Unmoderated card sorting

  • Moderated and unmoderated usability testing (wireframes and prototypes)

  • Competitive benchmarking

  • Cross-signal synthesis and IA recommendations

  • Microcopy strategy

  • Research-informed design guidance and feasibility alignment

  • Engineering handoff support and documentation

Research Objectives

We sought to understand:

  • How users mentally model blocks and styling

  • Where styling scope becomes ambiguous

  • How composite blocks affect speed vs. comprehension

  • Which interactions create cognitive friction

  • Whether a tabbed Content | Style | Settings model reduces guesswork

Methodology

Unmoderated Card Sorting

Two exercises over two weeks evaluated how users grouped 30 content blocks and 20 settings.

Recruitment constraints capped the sample at 12 participants. While larger samples are ideal for statistical clustering, grouping patterns repeated consistently across participants and locales, providing strong directional confidence.

Competitive Benchmarking (12 Tools)

Competitive analysis revealed consistent market gaps in:

  • Brand enforcement clarity

  • Collaboration ergonomics

  • Testing and validation visibility

  • Embedded AI guidance

Rather than reinventing editor layouts, this reinforced an opportunity to differentiate through brand-safe flexibility, inline trust tooling, and contextual AI co-piloting.

NPS + Verbatim Synthesis (9,300+ Comments)

Recurring friction themes surfaced across customer feedback:

  • No autosave or versioning

  • UI instability

  • Rigid template behaviors

  • Poor error messaging

These insights directly informed requirements for:

  • Draft history and recovery

  • Inline warnings and system feedback

  • Flexible block reuse

  • Clear override visibility

Insights were triangulated across verbatims, support trends, and usability observations.

Usability Testing (Milestone 3 Validation)

Unmoderated, think-aloud usability tests were conducted using an interactive Figma Make prototype to validate IA, mental-model alignment, and trust-related interactions.

Task Success Highlights

  • Preview before sending (90% success)

  • Adjust section background (80% success)

  • Add subject line (80% success)

  • Use the AI assistant (80% success)

  • Reorganize content (70% success)

  • Change text font (50% success)

  • Self-reported difficulty: 3.2 / 5 (generally approachable)

  • Friction most often occurred during mid-flow editing and system feedback moments, not initial discoverability.

  • When frustration appeared, it was typically tied to unclear system responses or prototype limitations, rather than confusion about what to do next.

Validation focused on confidence and predictability, not just task completion.

👉 Interactive prototype tested with users during Milestone 3 validation.

Key Findings & Synthesis

Users Organize by Mental Model, Not Feature Taxonomy

Users grouped blocks by layout, function, or frequency—rarely by system structure.

Implication:
The editor must support multiple mental models through labeling, previews, and contextual cues rather than enforcing a single taxonomy.

Styling Friction Is Driven by Invisible Scope

Users were less overwhelmed by styling options than by uncertainty around where changes applied.

Implication:
Styling systems must make scope visible, reversible, and safe.

Composite Blocks Trade Speed for Confidence

Pre-built blocks accelerated creation but introduced hesitation when editability was unclear.

Implication:
Speed only feels fast when users can predict outcomes.

Mid-Flow Editing Is the Primary Breakdown Point

Reorganizing content and editing in place caused the most friction, especially without strong intermediate feedback.

Implication:
Ghost states, hover previews, and inline feedback are essential for maintaining momentum.

Trust Is a UX Surface

Autosave, testing visibility, error handling, and recovery were perceived as business-critical—not advanced features.

Implication:
Trust must be embedded directly into the editing experience.

Differentiation Comes from Governance, Not Novelty

With market parity in editor layouts, competitive advantage lies in guidance, safety, and extensibility—not visual reinvention.

Design Decisions Informed by Research

  • Tabbed Content | Style | Settings panels aligned to user mental models

  • Tiered block library (Standard vs. Pre-built) to balance speed and control

  • Scoped microcopy clarifying styling inheritance

  • Override indicators to make system behavior explicit

  • Hover previews and ghost blocks to improve drag-and-drop confidence

  • Improved visibility of testing history and post-action system feedback

Status

This redesign is actively in engineering handoff.

  • Hi-fi prototypes in feasibility review

  • Component audit underway

  • Microcopy and override patterns being productized

Parallel exploration includes:

  • Version history and rollback

  • Comment pins and approvals

All decisions shown reflect research and feasibility alignment as of Q4 2025.

Why This Work Is Strategic

  • Grounded in mental-model excavation

  • Differentiates where competitors underperform

  • Improves perceived reliability through trust UI

  • Extensible across the omnichannel Passport redesign

  • Directly addresses churn drivers surfaced in customer feedback

Next Steps

High-fidelity validation will focus on:

  • Scoped styling inheritance

  • Drag-and-drop placement confidence

  • Composite block editing flows

  • Trust panel comprehension

  • Version history and rollback behaviors

Success will be measured by reduced placement errors, improved override comprehension, and organic discovery of collaboration touchpoints—ensuring users feel confident, not cautious, while editing.

What’s Next

I’m transitioning into a growth-focused UX role across the broader Sinch ecosystem—expanding this work to explore how onboarding, editor experience, embedded AI, platform governance, and cross-product journeys can improve activation, retention, and expansion beyond Mailjet.

Evidence Note

Evidence is drawn from internal usability testing, NPS analysis, and competitive research conducted at Mailjet. Artifacts shown are anonymized.
