
Vue Refactor Walkthrough

Let's dive into how I would approach the refactor of the new prop stability insight logic. Let's assume we are dealing with legacy code with no prior tests, so writing tests for everything is simply out of the question (that's a task all on its own). In cases like this, I like to isolate the feature logic so that it can be tested with as few dependencies (coupling to the legacy code) as possible.

Let's begin

Intro

I usually start by going through all the changes and taking mental notes on things that can be improved in our code. Then, I group or cluster related pieces. This example is simple, so most of our feature logic is already colocated, which is a good start, but this might not always be the case.

const activeId = ref(0);

function isActive(id: number) {
  return id === toValue(activeId);
}

Moving down, we notice some coupling between our feature and the component. I wonder if this is the best place for the business logic of how to cycle through our activeIds.

function onClick() {
  activeId.value = (toValue(activeId) + 1) % stack.length;
}

Finally, in our template, we see more coupling, this time to the presentational layer. This coupling is not so bad, but I always minimize the logic we put into the template. In my opinion, the dumber our presentational components, the better.

Lastly, I also see room for improvement in naming the property we are adding: isActive.

 v-bind="{ ...tech, isActive: isActive(tech.id) }"

These mental notes already hint at where to focus our refactoring efforts. However, a bigger, more important issue is becoming visible.

How can we test our new logic? As things stand, independently testing our changes will be pretty tricky.

Okay, it's clear now. The scope of our refactor will be to focus on modularizing the new logic so that it can be tested without the rest of the feature. As a bonus, we can improve our code's readability and the legacy component's reusability.


Tests

In Vue, the best way to isolate and modularize logic is through composables. By creating a new composable, we also have the perfect opportunity to follow a TDD (test-driven development) approach as much as possible. This change minimizes our chances of introducing side effects in the future while documenting how our feature works through tests.

These are the minimal areas that our tests need to cover.

  1. State
  2. Mutating our state
  3. Logical condition
  4. Extending our data set with the logical condition

So, let's begin by translating our acceptance criteria into some empty test cases:

useActiveId.test.ts
import { describe, test, expect } from 'vitest';

describe('useActiveId', () => {
  test('state: default case', () => {});
  test('state: initialized state', () => {});
  test('mutation: happy path', () => {});
  test('mutation: edge case', () => {});
  test('logicCondition: truth case', () => {});
  test('logicCondition: false case', () => {});
  test('Extending: Isolation case', () => {});
  test('Extending: Integration case', () => {});
});

Notice how I keep the titles for each test case pretty generic. I write just enough to link them to the acceptance criteria I have in mind.

This prevents the title from framing or influencing how I will test them.

As I work on each specific test case, I refine them with a more descriptive title based on how I design the tests. This approach greatly reduces the friction of getting started for me.

Next, we create mock data and define the constants driving our tests.

useActiveId.test.ts
import { useActiveId, DEFAULT_INIT, INCREMENT } from './useActiveId';

const MOCK_DATA = [
  { id: 0, name: 'first' },
  { id: 1, name: 'second' },
  { id: 2, name: 'third' },
];

const INIT_ID = 3;
const FIRST_ID = MOCK_DATA[0].id;
const SECOND_ID = FIRST_ID + INCREMENT;
const THIRD_ID = SECOND_ID + INCREMENT;
const LAST_ID = MOCK_DATA[MOCK_DATA.length - 1].id;

Notice how our constants are derived directly from the mocked data structure.

Doing this will decrease the fragility of our tests while improving the readability of the assertions in our test cases.


State

The ref holding our state is activeId. We must remove this from the component and move it to the composable. We also need to expose this ref so we can mutate it from outside the composable. Finally, we need to introduce a constant to hold the default init value for our composable.

For tests, we only have to consider the following two cases:

  1. It starts with the correct value by default (default case).
  2. We can initialize the composable with a custom value (initialized case).

In code, our refactor looks like this:

export const DEFAULT_INIT = 0;

export const useActiveId = (initActiveId: number = DEFAULT_INIT) => {
  const activeId = ref(initActiveId);

  return {
    activeId,
  };
};
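Filling in the two state cases is now straightforward. Here is a minimal sketch with Vitest, using the mock constants from above and the DEFAULT_INIT exported by the composable (these tests live in useActiveId.test.ts next to the skeleton we wrote earlier):

test('state: starts with the default value (default case)', () => {
  const { activeId } = useActiveId();

  expect(activeId.value).toBe(DEFAULT_INIT);
});

test('state: can be initialized with a custom value (initialized case)', () => {
  const { activeId } = useActiveId(INIT_ID);

  expect(activeId.value).toBe(INIT_ID);
});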

Mutation

Now that we have our state in the composable, we can move on to the logic that drives the mutation. At the moment, this is coupled to the component's onClick handler, which is not ideal for testing it independently.

function onClick() {
  activeId.value = (toValue(activeId) + 1) % stack.length;
}

Let's extract this logic and move it into our composable. I think having a visual reference of the mutation in the onClick handler would be beneficial for future code readers, though. I suggest creating a method that only returns the value we need instead of using a setter. This will allow us to reassign at the component level. We also need to add a new constant so we can configure our increment value.

If the method handled the mutation directly, I would call it something like setActiveId. However, since we are returning the next ID value from our data set and handling an edge case where we cycle from the last item to the first, I believe nextActiveId is a better name. Lastly, to make it generic and reusable, we must allow it to accept the data set we want to use as an argument, so it is not tied to any particular data set.
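At the component level, the handler is then reduced to a simple reassignment, which keeps the mutation visible where the click happens. A minimal sketch, assuming stack is still the component's local data set:

const { activeId, nextActiveId } = useActiveId();

function onClick() {
  activeId.value = nextActiveId(stack);
}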

Our mutation method would need to account for the two following cases:

  1. Every time we call this method, we increase the activeId value by the increment (happy path).
  2. If we are at the last item of our data set, the next ID returned will be the first one from our data set (edge case).

In code, the composable now looks like this:

export const DEFAULT_INIT = 0;
export const INCREMENT = 1;

export const useActiveId = (initActiveId: number = DEFAULT_INIT) => {
  const activeId = ref(initActiveId);

  function nextActiveId<T>(data: T[]): number {
    return (toValue(activeId) + INCREMENT) % data.length;
  }

  return {
    activeId,
    nextActiveId,
  };
};
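Translated into tests, a minimal sketch of these two cases (still in useActiveId.test.ts, using the constants defined above) could look like this:

test('mutation: returns the id of the next item in the data set (happy path)', () => {
  const { activeId, nextActiveId } = useActiveId(FIRST_ID);

  expect(nextActiveId(MOCK_DATA)).toBe(SECOND_ID);

  activeId.value = nextActiveId(MOCK_DATA);
  expect(nextActiveId(MOCK_DATA)).toBe(THIRD_ID);
});

test('mutation: cycles back to the first id after the last item (edge case)', () => {
  const { nextActiveId } = useActiveId(LAST_ID);

  expect(nextActiveId(MOCK_DATA)).toBe(FIRST_ID);
});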

Awesome, things are already looking a lot better. Our decision to isolate the new feature with a composable is making it really easy for us to test things. Our design has also improved the naming of our code and the readability of our tests. So, let's keep going.


Logical condition

The next step is our logical condition, which is also at the component level. Yes, you guessed it, we need to move this into the composable as well. However, we can improve its naming because, in its new context, it might not fit as well anymore.

I like using names like isActive for values that resolve to a boolean, but the is prefix also conveys that we are dealing with runtime state, which is no longer the case here because we are extracting this from the component. For this reason, I would rename this method to hasActiveId in our composable. This name communicates that the function matches the logical condition it describes, regardless of whether it runs in a server or client context.

In terms of testing, we only need to cover the truthy and falsy cases:

  1. returns true when we pass an id that matches the activeId (true case)
  2. returns false when we pass an id that does not match the activeId (false case)

No matter how simple the tests might be (the ones above are pretty simple), I would include them because, in my opinion, they document the developer's intent while writing the code and the functional requirements of the feature.

This testing strategy enables us to refactor with confidence at any time in the future.

export const DEFAULT_INIT = 0;
export const INCREMENT = 1;

export const useActiveId = (initActiveId: number = DEFAULT_INIT) => {
  const activeId = ref(initActiveId);

  function nextActiveId<T>(data: T[]): number {
    return (toValue(activeId) + INCREMENT) % data.length;
  }

  function hasActiveId(id: number): boolean {
    return id === toValue(activeId);
  }

  return {
    activeId,
    nextActiveId,
    hasActiveId,
  };
};
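The two cases can then be sketched like this in the same test file (a minimal sketch, reusing the mock constants from earlier):

test('hasActiveId: returns true for an id that matches the activeId (true case)', () => {
  const { hasActiveId } = useActiveId(FIRST_ID);

  expect(hasActiveId(FIRST_ID)).toBe(true);
});

test('hasActiveId: returns false for an id that does not match the activeId (false case)', () => {
  const { hasActiveId } = useActiveId(FIRST_ID);

  expect(hasActiveId(SECOND_ID)).toBe(false);
});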

The naming in our template loop is already more descriptive:

// from
v-bind="{ ...tech, isActive: isActive(tech.id) }"
// to
v-bind="{ ...tech, isActive: hasActive(tech.id) }"

Extending

The next step is a big one. We need to find a cleaner way to extend the data set to comply with the implementation details of our feature. My preferred approach here is a function that extends the schema of any data set.

This new function needs to:

  1. make this extension testable in isolation.
  2. minimize the coupling in the template.

With these two things in mind, I'm thinking of creating a new method that appends our feature property to any data set we pass into it. This would separate our data set from the single responsibility of our composable, which is to deal with the activeId. This functional approach allows us to test the function in isolation, thus making our composable highly reusable.

function withIsActive<T>(data: T[]) {
  return data.map((item) => {
    return {
      ...item,
      isActive: hasActiveId(item.id),
    };
  });
}

My idea is to receive any data set as an argument and map through it. For each item, we spread its existing properties first and then add our isActive property, using the output of the hasActiveId method. The method then returns the extended objects for the template to consume.

For testing, we only need to consider two cases:

  1. That every item in the data set is extended with a property isActive with a value type of boolean (isolation case)
  2. That the isActive property is truthy for the current item as we iterate through the whole data set (integration case)

With the new method added, the composable looks like this:

export const DEFAULT_INIT = 0;
export const INCREMENT = 1;

export const useActiveId = (initActiveId: number = DEFAULT_INIT) => {
  const activeId = ref(initActiveId);

  function nextActiveId<T>(data: T[]): number {
    return (toValue(activeId) + INCREMENT) % data.length;
  }

  function hasActiveId(id: number): boolean {
    return id === toValue(activeId);
  }

  function withIsActive<T>(data: T[]) {
    return data.map((item) => {
      return {
        ...item,
        isActive: hasActiveId(item.id),
      };
    });
  }

  return {
    activeId,
    nextActiveId,
    hasActiveId,
    withIsActive,
  };
};

Types

At this point in our refactor, we notice some red squiggly lines in our IDE; TypeScript is unhappy. The types for our composable are pretty straightforward. The only one that complicates things a bit is our withIsActive method. We made it generic because we did not want to fix it to any specific shape. The problem is that when we assign the value of isActive, we must pass item.id into the hasActiveId method.

TS will complain because id does not exist in our generic type T.

So, let's not bang our heads against the type wall here. We need to constrain our type just enough to give helpful feedback to any consumer of our method about what might break its core functionality. This method would be useless if the data set we pass in has no id property. A type like the one below protects against this case and improves the developer experience with hints in case of trouble, which is enough, in my view.

useActiveId.types.ts
export type WithId<T> = {
  id: T & number;
  [x: string]: T;
};

This type defines the interface of an object with an id key whose type is the intersection of our generic type T and number.

We need to use an intersection here; otherwise, TS will complain that the number (the type we expect from the id) cannot be assigned to the type of T.

This is our only requirement and, thus, our primary constraint, so we hard-code it. We then open the type up to accept any key of type string with a value of the generic type T. This way, we can customize T from the outside, which is better than fixing it to a fallback of unknown.
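Here is a quick sketch of how this constraint behaves for a consumer, assuming withIsActive's parameter is typed as WithId<T>[] (as it is in the final code below):

const { withIsActive } = useActiveId();

// OK: every item has an id of type number; the other keys fall under the index signature
withIsActive<string | number>([{ id: 0, name: 'vue' }]);

// Type error: Property 'id' is missing, so TS points the consumer straight at the requirement
// withIsActive<string | number>([{ name: 'vue' }]);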

Now we can fix the squiggly lines in our tests:

useActiveId.test.ts
test('withIsActive: Isolation case', () => {
  const { withIsActive } = useActiveId(0);
  withIsActive<string | number>(MOCK_DATA).forEach((item) => {
    expect(item).toHaveProperty('isActive');
    expect(item.isActive).toBeTypeOf('boolean');
  });
});
test('withIsActive: Integration case', () => {
  const { activeId, nextActiveId, withIsActive } = useActiveId(0);

  MOCK_DATA.map((item) => item.id).forEach((id) => {
    expect(withIsActive<string | number>(MOCK_DATA)[id].isActive).toBeTruthy();
    activeId.value = nextActiveId(MOCK_DATA);
  });
});

Now we are setting the generic of withIsActive to string | number. This tells TS that our data set is an array of objects with at least an id of type number, where any other string key holds a value of type number or string.


Encapsulation

We have worked hard to make our List and ListItem components clean, performant, and reusable, but we can still do one last thing to improve them. In a more complex example, there might be a lot of configuration to consider. Also, if we use this component in multiple places in our app, you can see how keeping all of them in sync can quickly lead to headaches or repetitive work. We can improve all of this by encapsulating the reusable component, together with all the configuration our feature requires, into a configured component, which is another best practice I like.

It starts by framing our reusable component from the perspective of our feature, which is usually also the perspective of the user.

The user does not care that we have a RefactoredList or StableList; they see and care about the TechStackList.

So, we begin by creating a new component with this name and removing all coupling to our data set from the RefactoredList component.

<script setup lang="ts">
const stack = [
  { id: 0, name: 'vue' },
  { id: 1, name: 'nuxt' },
  { id: 2, name: 'pinia' },
  { id: 3, name: 'tailwind' },
  { id: 4, name: 'typescript' },
  { id: 5, name: 'vitest' },
];
</script>

<template>
  <RefactoredList :data="stack" />
</template>

Great! This gives us a better place for our data set (if we had to fetch data, this would make even more sense). Best of all, our reusable component is finally truly generic. Now, it is also easy to reuse our feature component in multiple parts of the app.
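For a sense of the other side of this boundary, here is roughly what the decoupled RefactoredList could look like once it receives its data as a prop. This is only a sketch: the template markup and the ListItem props are assumptions, since the walkthrough does not show the full component; the point is the wiring to the composable.

<script setup lang="ts">
import { useActiveId } from './useActiveId';
import type { WithId } from './useActiveId.types';

// The data set now arrives as a prop instead of living inside the component.
const props = defineProps<{ data: WithId<string | number>[] }>();

const { activeId, nextActiveId, withIsActive } = useActiveId();

// The mutation stays visible at the component level, as discussed earlier.
function onClick() {
  activeId.value = nextActiveId(props.data);
}
</script>

<template>
  <!-- ListItem and its exact props are assumed here -->
  <ListItem
    v-for="item in withIsActive(props.data)"
    :key="item.id"
    v-bind="item"
    @click="onClick"
  />
</template>

With the list component decoupled like this, the TechStackList wrapper above is the place that carries the feature-specific configuration.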

At first glance, this wrapper component might not seem necessary. With this simple example, I would agree with you. However, one of the reasons I like this best practice is because it allows us to have a nice package for our feature while also setting the boundaries for our testing strategy:

  • The reusable component is best visually documented by Storybook and can show generic data.
  • The configured component can be functionally documented by component tests in Cypress to make assertions about our feature data set in the DOM (see the sketch below).
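To illustrate that second point, a Cypress component test for the configured component could look something like this. It is only a sketch, assuming a standard Cypress + Vue component-testing setup with the default cy.mount command:

// TechStackList.cy.ts
import TechStackList from './TechStackList.vue';

describe('<TechStackList />', () => {
  it('renders every item of the feature data set in the DOM', () => {
    cy.mount(TechStackList);

    // Assert against the feature data set, not generic placeholder data.
    ['vue', 'nuxt', 'pinia', 'tailwind', 'typescript', 'vitest'].forEach((name) => {
      cy.contains(name);
    });
  });
});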

Considering all this, it's well worth the effort.


Final code

We did it! Our code is much cleaner now, and our tests give us confidence moving forward. Thank you for sticking with me through the whole process. I know it wasn't quick, but we should be proud of our accomplishments. Take it all in; this is the final state of our feature code:

import { ref, toValue } from 'vue';
import type { WithId } from './useActiveId.types';

export const DEFAULT_INIT = 0;
export const INCREMENT = 1;

export const useActiveId = (initActiveId: number = DEFAULT_INIT) => {
  const activeId = ref(initActiveId);

  function nextActiveId<T>(data: T[]): number {
    return (toValue(activeId) + INCREMENT) % data.length;
  }

  function hasActiveId(id: number): boolean {
    return id === toValue(activeId);
  }

  function withIsActive<T>(data: WithId<T>[]) {
    return data.map((item) => {
      return {
        ...item,
        isActive: hasActiveId(item.id),
      };
    });
  }

  return {
    activeId,
    nextActiveId,
    hasActiveId,
    withIsActive,
  };
};
Open Playground

In the playground's terminal, you can press Ctrl+C and then run npm run test to run the tests with Vitest.

