The Quest Engine

Building a Modular Objective System for Rapala Fishing: World Tour

The Problem


Early in Rapala Fishing development, it was obvious players had no clear objectives. The publisher's brief called for campaigns, daily missions, weekly challenges, a battle pass, seasonal quests — multiple distinct features, each with different rules, timelines, and reward structures.


The instinct in most productions — and what my producer told me to do the first time — is to design each of these separately: a campaign system, a daily system, a battle pass system. Each feature gets its own logic, its own implementation, its own doc. That approach works — until you need to add something new, or hand the work to someone else.


I was about to bring in junior designers. I knew more content types would be requested over time. I needed a system they could operate without me — and that could grow without requiring new engineering work every time.

The Insight


Every quest, regardless of type, does the same three things: it presents a goal, it tracks progress toward that goal, and it delivers a reward when the goal is met. Campaign quests do this. Daily quests do this. Battle pass quests do this.


The differences between them aren't in the quest itself — they're in how quests are grouped, ordered, and surfaced to the player.


That meant the right architecture wasn't a quest system. It was a quest engine: a single backbone that any feature in the game could plug into.

Not an actual in-game image, though similar to what may appear in the game today. This is a mockup to illustrate the article. The actual materials are confidential and protected under NDA.

The Architecture…


The system is built in three layers.

Quests are the atomic unit. A quest has a goal (catch 5 Spotted Bass), a reward, and optional elements — a title, a description, a time constraint. Goals are boolean: complete or not. Progress is tracked. Rewards are claimed through a conscious player action, not delivered automatically. This distinction matters — it preserves player agency and creates a moment of engagement at completion.


Quests have states: Unavailable, Ongoing, Completed, Done, Failed, On Hold. Critically, the Failed state is never visible to the player. A timed quest that expires silently disappears rather than being flagged as a failure — a deliberate choice to protect player motivation.
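The shipped implementation is under NDA, but the quest unit described above can be sketched as a small data model. Names and fields here are illustrative assumptions, not the actual schema:

```python
from dataclasses import dataclass
from enum import Enum, auto
from typing import Optional

class QuestState(Enum):
    UNAVAILABLE = auto()
    ONGOING = auto()
    COMPLETED = auto()   # goal met, reward not yet claimed
    DONE = auto()        # reward claimed through a conscious player action
    FAILED = auto()      # internal only: never surfaced to the player
    ON_HOLD = auto()

@dataclass
class Quest:
    goal_target: int                      # e.g. catch 5 Spotted Bass
    reward: str
    title: Optional[str] = None           # optional elements
    description: Optional[str] = None
    time_limit_hours: Optional[int] = None
    progress: int = 0
    state: QuestState = QuestState.UNAVAILABLE

    def record_progress(self, amount: int = 1) -> None:
        if self.state is not QuestState.ONGOING:
            return
        self.progress = min(self.progress + amount, self.goal_target)
        if self.progress >= self.goal_target:
            self.state = QuestState.COMPLETED  # the goal itself is boolean

    def claim_reward(self) -> Optional[str]:
        # Rewards are claimed, never auto-delivered: this preserves
        # player agency and creates a moment of engagement.
        if self.state is QuestState.COMPLETED:
            self.state = QuestState.DONE
            return self.reward
        return None
```

Note how the Failed state exists in the model but carries no player-facing behavior; surfacing (or silently hiding) an expired quest is the UI's decision, not the quest's.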


Quest Lists are containers. A list defines how many quests a player can hold simultaneously, whether quests are ordered or random, whether they repeat, and whether completing the list triggers a reward. Lists handle the behavioral logic of a quest group — the quest itself only handles the goal.


This separation is what makes the system composable. A daily list uses random, repeatable quests with a 24-hour time constraint. A campaign list uses ordered, one-time quests with no time constraint. The quest design is the same. The list handles the difference.
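A hypothetical sketch of that separation, assuming a list config with flags for ordering, repetition, and capacity (not the shipped data model): the same quest pool behaves as a daily rotation or a campaign sequence purely through list configuration.

```python
from dataclasses import dataclass
from typing import List, Optional, Set
import random

@dataclass
class QuestList:
    quests: List[str]                  # quest ids; goals live in the quest layer
    max_active: int = 1                # how many quests a player holds at once
    ordered: bool = True               # sequential vs. random selection
    repeatable: bool = False           # can a quest be served again
    time_limit_hours: Optional[int] = None
    completion_reward: Optional[str] = None

    def next_quests(self, completed: Set[str]) -> List[str]:
        pool = [q for q in self.quests if self.repeatable or q not in completed]
        if self.ordered:
            return pool[:self.max_active]
        return random.sample(pool, min(self.max_active, len(pool)))

# Same quest pool, two behaviors: only the list config differs.
quests = ["catch_5_bass", "land_a_pike", "fish_at_dawn"]
daily = QuestList(quests, max_active=3, ordered=False,
                  repeatable=True, time_limit_hours=24)
campaign = QuestList(quests, max_active=1, ordered=True, repeatable=False)
```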


Quest Lines group lists into progressions. They're what makes a campaign feel like a campaign — an ordered sequence of lists that can carry narrative, unlock new content, and deliver macro rewards at completion. Quest Lines can nest other Quest Lines, enabling branching structures when needed.


Above all of this sits the Journal — the feature-level container that a specific part of the game owns. The Daily Journal owns its quest line. The Campaign Journal owns its quest line. Each Journal is independent, but all of them run on the same underlying system.

… in Practice


Any new quest type in the game required no new engineering. A new Journal, a new Quest Line, a new set of Quest Lists — all configurable in a spreadsheet, readable by the backend, deployable without a code release.
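To illustrate what "configurable in a spreadsheet" can look like, here is a minimal sketch with hypothetical columns (the actual sheet layout and goal taxonomy are NDA'd): each row fully describes a quest, and the backend reads rows rather than code.

```python
import csv
import io

# Hypothetical spreadsheet export: adding a row adds a quest,
# with no code release required.
SHEET = """journal,list,goal_type,goal_count,reward,time_limit_hours
Daily,daily_rotation,catch_species:spotted_bass,5,100_coins,24
Daily,daily_rotation,catch_any,10,50_coins,24
Campaign,chapter_1,reach_level,3,new_rod,
"""

def load_quests(sheet: str):
    rows = list(csv.DictReader(io.StringIO(sheet)))
    for row in rows:
        row["goal_count"] = int(row["goal_count"])
        # An empty cell means no time constraint.
        row["time_limit_hours"] = (
            int(row["time_limit_hours"]) if row["time_limit_hours"] else None
        )
    return rows

quests = load_quests(SHEET)
```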


The triggers system extended this further. Quests could fire on almost any event in the game: unlocking a destination, reaching a level, completing another quest, opening the game after X days of inactivity, buying from the store. Anything trackable in the game could become a quest trigger — and new trigger types could be added without touching the quest architecture itself.
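One common way to get that property is an event bus: quests subscribe to named game events, and a new trigger type is just a new event name. This is a sketch under that assumption, not the shipped implementation.

```python
from collections import defaultdict
from typing import Callable, Dict, List

class TriggerBus:
    """Maps game events to quest progress callbacks.

    New trigger types are just new event names, so the quest
    architecture never changes when the game emits a new event.
    """
    def __init__(self) -> None:
        self._subscribers: Dict[str, List[Callable[[dict], None]]] = defaultdict(list)

    def subscribe(self, event: str, callback: Callable[[dict], None]) -> None:
        self._subscribers[event].append(callback)

    def emit(self, event: str, payload: dict) -> None:
        for callback in self._subscribers[event]:
            callback(payload)

# Usage: a quest listens for any trackable event.
bus = TriggerBus()
progress = {"caught": 0}

def on_fish_caught(payload: dict) -> None:
    if payload.get("species") == "spotted_bass":
        progress["caught"] += 1

bus.subscribe("fish_caught", on_fish_caught)
bus.emit("fish_caught", {"species": "spotted_bass"})
bus.emit("fish_caught", {"species": "pike"})  # ignored by this quest
```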


At soft launch, the system supported 600+ active tasks in production, primarily from daily and weekly quest configurations. The real test of an architecture isn't how it holds up at launch; it's whether it holds up as content scales without engineering having to come back to it. In this case, it did: content volume grew independently of system complexity, with no additional engineering support or refactoring, which is exactly where these implementations tend to break down.

Where Design and Production Parted Ways


Two gaps opened between the architecture as designed and the system as shipped. Neither was a design failure — both were production decisions with real costs.


UX adoption. The engine was designed for composability — a single system that any feature could plug into. That abstraction made sense to engineers and to other systems designers, but the UX team worked feature-first: each feature they designed came with its own mental model of how objectives should behave. A shared backbone wasn't the frame they were operating in.


The friction this created was real. I ran alignment sessions and eventually got UX to a shared understanding of the system — but the misalignment lasted long enough to slow adoption and create rework downstream. The lesson wasn't that the architecture was wrong. It was that an abstract system, however internally coherent, needs to be translated into the language of each discipline that's expected to build on top of it. A one-pager framing the engine through UX concerns — screens, flows, player moments — would have closed that gap faster than documentation aimed at engineers.


Dynamic difficulty. The design specified that random quest selection would be weighted by player behavior and progression state — giving newer players simpler quests and veteran players harder ones. This was scoped, specced, and agreed on. It was deprioritized during engineering planning, flagged as a phase-two feature, and never recovered: the project transitioned to Reliance before the implementation window reopened.


The cost was structural. Without dynamic selection, every difficulty variation had to be pre-authored manually. Instead of the system adapting to the player, the design team had to cover every possible player state through content volume — slowing iteration, increasing the surface area for inconsistencies, and adding maintenance overhead that compounded as the quest count grew. That's a meaningful part of why the system reached 600+ tasks. Not content ambition alone, but design overhead generated by an incomplete implementation.


The system worked. It just worked at a higher cost than the architecture was supposed to require.

Not the actual spreadsheet used for implementation. This is a mockup to illustrate the article. The actual implementation materials are confidential and protected under NDA.

What I Would Do Differently


Two things, with different root causes.

On UX alignment: kickoffs now include discipline leads from the start — UX, engineering, and production in the same room before the spec is written. Systems documentation gets a companion artifact for each discipline, framed in their language rather than the system's internal logic.


On dynamic difficulty: I'd make it a launch-blocking dependency and anchor it to content production cost during planning. "Without this, we need X more manually authored quest variants" lands in engineering prioritization in a way that design rationale alone doesn't. Features that get scoped as phase-two rarely come back.

What Shipped


The Quest System scaled to 600+ active tasks in production across daily and weekly content. Systems that reach this content volume typically require structural revision to stay stable. This one didn't — no rework, no emergency patches, no feature-specific forks. It handled every quest type that shipped, and was built to handle types that never made it — campaign quests and monthly challenges eventually cut for scope, not for system limitations.


Junior designers were able to create and configure new quest content without touching the underlying system. The engine did its job: it separated the design of content from the implementation of the system that delivered it. That separation is what let the content volume scale without scaling the engineering cost.

Rapala Fishing (formerly: Rapala Fishing: World Tour) — Lead Game Designer [de facto Game Director] (Oktagon/Fortis Games) · 2021–2023