How a QS Top 32 University Pre-Tested Messaging Ahead of a 38% Rise in Applications

For programs with one main launch window, weak messaging is expensive. HKSEC, a venture competition run by The Chinese University of Hong Kong, used Decisions Lab to test different campaign directions before launch. In the following cycle, applications rose 38%, from 133 to 184. CUHK is ranked #32 globally in the QS World University Rankings 2026, and HKSEC has engaged more than 11,700 students from 233 academic institutions since 2007.

Success Metrics

  • 38% increase in applications
  • 184 applications vs 133 the previous cycle
  • 5 simulation runs across modeled participants

For HKSEC, messaging was not a branding detail. It shaped whether students understood the competition, felt aligned with it, and decided to apply. Because the program runs on a fixed annual cycle, there is limited room to discover weak messaging after launch. If the positioning misses, the whole intake suffers.

The invisible gap in launch decisions

Before this pilot, the decision process looked like most teams': review past cohorts, consider trends, discuss possible directions, and choose the one that feels strongest internally. That process can generate ideas. It cannot reliably show how the audience will actually read them.

A message can sound ambitious internally and still feel generic to applicants. It can seem inspiring in a meeting and still fail to connect with the motivations that drive action. Most teams only learn that after launch, when budget and time have already been spent.

Pre-testing what works before launch

In this pilot exercise, Decisions Lab used HKSEC-provided data together with publicly available information on prior participants to build digital twins of the target audience. The team then ran five simulations and averaged the results across runs to reduce noise and identify the most stable signal.

The simulation tested three simple questions: which direction resonated most, which best aligned with how participants saw social enterprise and its challenges, and which best fit their values and expectations. The goal was not just to find the option with the loudest reaction. It was to find the direction with the strongest overall fit.
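The aggregation logic described above can be sketched in a few lines. The direction names, question labels, and scores below are purely illustrative placeholders, not HKSEC's actual data or Decisions Lab's implementation; the sketch only shows the principle of averaging each question across runs, then across questions, so that one loud run cannot dominate the recommendation.

```python
# Hypothetical sketch of aggregating simulated audience scores.
# All names and numbers are illustrative, not real campaign data.
from statistics import mean

# scores[direction][question] = one score per simulation run (5 runs)
scores = {
    "Direction A": {"resonance":  [7.1, 6.8, 7.4, 7.0, 6.9],
                    "alignment":  [6.2, 6.5, 6.1, 6.4, 6.3],
                    "values_fit": [6.8, 7.0, 6.7, 6.9, 7.1]},
    "Direction B": {"resonance":  [6.5, 8.2, 6.4, 6.6, 6.3],
                    "alignment":  [6.0, 6.1, 5.9, 6.2, 6.0],
                    "values_fit": [6.1, 6.3, 6.0, 6.2, 6.4]},
}

def overall_fit(direction_scores):
    """Average each question across runs, then average across questions."""
    return mean(mean(runs) for runs in direction_scores.values())

best = max(scores, key=lambda d: overall_fit(scores[d]))
print(best)  # the direction with the strongest overall fit
```

Note how Direction B posts the single highest score of any run (8.2 on resonance in run 2), so a slice of one run could favor it, yet the averaged pattern favors Direction A. That is the "loudest reaction" versus "strongest overall fit" distinction in miniature.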

The impact: clearer direction before launch

In the following cycle, applications increased from 133 to 184, a 38% rise.

That should not be overstated as proof that messaging alone caused the increase. Launch performance also depends on execution, timing, and outreach. But it does show why pre-validating messaging matters. When there is one main launch window, the message is not decoration. It is part of the conversion path.

The real value of the pilot was that HKSEC did not have to rely only on internal judgment. It had a structured way to test how applicants were likely to interpret the campaign before going live. At HKSEC's request, the case is described publicly as a pilot exercise, not an official collaboration, and the draft campaign themes are not disclosed.

Why traditional feedback falls short

Most teams still choose messaging through some mix of internal debate, scattered comments, and simple preference voting. Each of those breaks in predictable ways. Internal teams are too close to the work. Feedback is anecdotal. Voting shows what people picked, but not why they picked it.

That issue showed up during the pilot itself. In the follow-up email chain, one spreadsheet appeared to suggest a different outcome from the summary report. The clarification was that the spreadsheet reflected only one simulation run, while the final recommendation came from the aggregated pattern across all five runs and questions.

That is the difference between counting reactions and modeling reception.

A spreadsheet can show what won in one slice.
A simulation can show which direction is more robust overall.

Why businesses should care

This is not just a university case. It is the same problem businesses face when launching a campaign, refining positioning, or deciding which narrative to put into market.

When the message is wrong, the loss is usually quiet. Conversion slips. Urgency drops. The offer becomes harder to understand. By the time the data makes that obvious, budget has already been spent.

Audience simulation gives teams a way to test message-market fit earlier, before a weak narrative drags down the whole cycle.


The shift from guessing to pre-validation

HKSEC did not need more brainstorming. It needed a better way to choose between plausible directions before a fixed annual launch window.

That is the broader lesson in this case. Decisions Lab helps teams simulate how their audience is likely to interpret a message before the market makes the decision for them.