In 2018, Yelp switched from using the MVP architecture to the MVI architecture for Android development. Since then, adoption of our new MVI architecture library has risen and we’ve seen some great performance and scalability wins. In this blog post, we’ll cover why we switched to MVI in the first place, how we managed to get performant screens by default, and our take on unit testing MVI.

What is MVI?

One of the main reasons to use an architecture is to make things easier to test by separating concerns. For Android, this means keeping the Android SDK out of our presenters and abstracting away all the code that will cause issues for unit tests.

The general idea of Model View Intent (MVI) is that when the user interacts with the UI, a view event is sent to be processed in the model. The model can make network requests, manipulate some view state and send the state back to the view. They’re connected by an event bus or stream so no direct references to Android are required (thus concerns are separated for testing).

Why we switched away from MVP

Our MVP implementation did not scale well

Although Model-View-Presenter (MVP) is a great architecture with a lot of benefits, we found that it didn’t scale well for our larger, more complicated pages. Our presenters grew to far too many lines of code and became unwieldy to maintain as we added more state management and more complex presenter logic. It was possible to scale an MVP page using multiple presenters, but there was no single documented approach. Our MVP contracts also contained many duplicated interface methods.

We wanted free performance by default

When Google introduced the Android Vitals dashboard and announced that performance can affect our listing and promotability in the Play Store, Yelp’s Core Android team invested effort in improving our cold start timings, frame rendering timings, and frozen frames percentages. Although we made significant improvements in those areas, we found that performance regressions were easy to come by and our performance degraded again over time.

There are a few ways to prevent performance regressions: we could set up performance alerts, we could try to catch regressions before they’re merged, or we could try to make our apps run smoothly by default. While we tried all of these, in the end our performance gains came for free through auto-mvi, our new MVI library.

Why we chose MVI and not MVVM

We evaluated both the MVI and the Model-View-ViewModel (MVVM) architectures before ultimately deciding on MVI. First, we looked at the basic requirements in our apps. Both of Yelp’s apps require a lot of scrolling and clicking in comparison to, for example, video streaming applications. Next, we looked at what other technologies we were using and determined which architecture would be most compatible with them.

We rely heavily on our in-house Bento library which is a wrapper around RecyclerView. In Bento, a Component is a part of the UI which can be slotted into any RecyclerView. We set up each Component to be its own mini MVP-universe that has its own view, model, and presenter.

In our prototypes, we found that combining Bento with the MVVM pattern was confusing and led to difficult-to-read code. However, MVI complemented Bento and allowed click events to be fired from within view holders without the need for direct references to the encompassing Fragment or Activity. Additionally, since some of our screens have a lot of UI elements, MVVM would require data classes with many (greater than 30) fields, which would not scale well.

How does auto-mvi work?

When the user interacts with the app, view events are emitted from the view (Fragment or Activity). A view event might be a click or scroll event. A presenter (note: to avoid confusion, at Yelp, we refer to the Model in MVI as the “presenter”) receives the events and sends back view states. The view then responds to these states and decides what to show accordingly. These view events and states are represented as sealed classes in Kotlin. They are emitted over an event bus which both the view and presenter can listen to for new events and states.
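As a concrete illustration, a contract for a simple screen could be sketched like this. The event and state names are illustrative and the base types are simplified; real contracts also implement auto-mvi’s base interfaces such as AutoMviViewEvent:

sealed class MyFeatureEvents {
    // Emitted by the view when the user interacts with the UI
    object HeaderClicked : MyFeatureEvents()
    object FooterClicked : MyFeatureEvents()
}

sealed class MyFeatureStates {
    // Emitted by the presenter for the view to render
    object ShowLoadingProgress : MyFeatureStates()
    data class ShowHeaderDetails(val title: String) : MyFeatureStates()
}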

Scaling and readability with annotations

Both the presenter and the view must handle their incoming events and states. Most Android MVI implementations accomplish this with a when statement in Kotlin. However, a when statement wouldn’t scale well for Yelp; it would quickly become difficult to read. Imagine the following, but with fifty more is clauses:

private fun onViewEvent(viewEvent: MyFeatureEvents) {
    when (viewEvent) {
        is HeaderClicked -> onHeaderClick()
        is FooterClicked -> onFooterClick()
    }
}

To get around the problem with the when statement, the general idea was to route states and events to function references using a map. That meant going from the above code to:

private val functionMap = mapOf(
    HeaderClicked::class to ::onHeaderClick,
    FooterClicked::class to ::onFooterClick
)

private fun onViewEvent(viewEvent: MyFeatureEvents) {
    (functionMap[viewEvent::class] as KFunction0<Unit>).invoke()
}

Then all onViewEvent() needs to do is look up the function in the map.

With that, we could avoid the big when statement. Writing the function map by hand is gross, though, and it still defeats our scalability goal: we’d just be trading a large when statement for a large map. We would also need to handle the number of parameters each function can take; the code above only covers the easiest, zero-parameter case (sketched below).
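For instance, as soon as one event carries data, its handler needs a parameter and the dispatch has to branch on arity. Here is a hedged sketch of that problem; SearchTextChanged and the dispatch logic are purely illustrative and not part of our actual code:

import kotlin.reflect.KClass
import kotlin.reflect.KFunction

sealed class MyFeatureEvents {
    object HeaderClicked : MyFeatureEvents()
    data class SearchTextChanged(val query: String) : MyFeatureEvents()
}

class ArityExample {
    fun onHeaderClick() { /* zero parameters */ }
    fun onSearchTextChanged(event: MyFeatureEvents.SearchTextChanged) { /* one parameter */ }

    private val functionMap: Map<KClass<*>, KFunction<*>> = mapOf(
        MyFeatureEvents.HeaderClicked::class to ::onHeaderClick,
        MyFeatureEvents.SearchTextChanged::class to ::onSearchTextChanged
    )

    fun onViewEvent(viewEvent: MyFeatureEvents) {
        val fn = functionMap[viewEvent::class] ?: return
        // The dispatcher now has to know how many arguments each handler expects.
        if (fn.parameters.isEmpty()) fn.call() else fn.call(viewEvent)
    }
}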

This is how we arrived at the idea of annotating the functions instead. When the presenter and view are created, we use reflection (on a background thread) to create the map of states and events to functions. Our AutoFunction interface (which is where the “auto” comes from) provides the mechanism for this: it routes incoming states and events to the relevant functions and executes them with reflection. Again, taking the following example:

private fun onViewEvent(viewEvent: MyFeatureEvents) {
    when (viewEvent) {
        is HeaderClicked -> onHeaderClick()
        is FooterClicked -> onFooterClick()
    }
}

Instead we have:

@Event(HeaderClicked::class)
fun onHeaderClick() {
  // do something
}

@Event(FooterClicked::class)
fun onFooterClick() {
  // make network request etc
}

With this approach, the scaling issue is solved. There is no when statement at all, no function map, and not even a specific function responsible for handling incoming events or states. It also has the advantage that it’s incredibly easy to read.
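To make the mechanism concrete, here is a minimal sketch of annotation-driven dispatch built with Kotlin reflection. The @Event annotation mirrors the one used above, but AutoFunctionSketch, eventMap, and onViewEvent are illustrative names and not auto-mvi’s actual internals:

import kotlin.reflect.KClass
import kotlin.reflect.KFunction
import kotlin.reflect.full.findAnnotation
import kotlin.reflect.full.memberFunctions

@Target(AnnotationTarget.FUNCTION)
annotation class Event(val eventClass: KClass<*>)

abstract class AutoFunctionSketch {

    // Built once via reflection; in auto-mvi this work happens on a background thread.
    private val eventMap: Map<KClass<*>, KFunction<*>> by lazy {
        this::class.memberFunctions
            .mapNotNull { fn -> fn.findAnnotation<Event>()?.let { it.eventClass to fn } }
            .toMap()
    }

    // Routes an incoming event to the function annotated with that event's class.
    fun onViewEvent(event: Any) {
        eventMap[event::class]?.call(this)
    }
}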

Scaling with sub presenters

One of the issues we found while using MVP was that for the most complex screens in Yelp’s consumer app, the presenters quickly grew difficult to maintain and understand. With this in mind, the auto-mvi library has a strategy for scaling presenters on complex screens like these. A page defines one main presenter, and within it there can be multiple sub presenters. A sub presenter can handle the logic for a particular feature or part of the UI. For example, for a page with these click events defined in the contract:

sealed class MyFeatureEvents : AutoMviViewEvent {
   object MyButton1Clicked : MyFeatureEvents()
   object MyButton2Clicked : MyFeatureEvents()
   object MyButton3Clicked : MyFeatureEvents()
}

We could respond to them all in one presenter like this:

class MyFeaturePresenter(
   eventBus: EventBusRx
) : AutoMviPresenter<MyFeatureEvents, MyFeatureStates>(eventBus) {

   @Event(MyButton1Clicked::class)
   fun onMyButton1Clicked() {
       // do something
   }

   @Event(MyButton2Clicked::class)
   fun onMyButton2Clicked() {
       // do something
   }

   @Event(MyButton3Clicked::class)
   fun onMyButton3Clicked() {
       // do something
   }
}

But with a sub presenter, we can handle a subset of events elsewhere:

class MyFeaturePresenter(
  eventBus: EventBusRx
) : AutoMviPresenter<MyFeatureEvents, MyFeatureStates>(eventBus) {

   // The rest of the click events are handled in here
   @SubPresenter private val subPresenter = MyFeatureSubPresenter(eventBus)

   @Event(MyButton1Clicked::class)
   fun onMyButton1Clicked() {
        // do something
   }
}
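
The sub presenter itself can then look much like a small presenter of its own. A hedged sketch follows; whether sub presenters share the AutoMviPresenter base class is an assumption here, the point is simply that they listen to the same event bus:

class MyFeatureSubPresenter(
    eventBus: EventBusRx
) : AutoMviPresenter<MyFeatureEvents, MyFeatureStates>(eventBus) {

    @Event(MyButton2Clicked::class)
    fun onMyButton2Clicked() {
        // do something
    }

    @Event(MyButton3Clicked::class)
    fun onMyButton3Clicked() {
        // make network request etc
    }
}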

Since everything is connected via an event bus, it’s simple for a sub presenter to handle a portion of the incoming view events and respond to the view. A bonus win of this pattern is better unit test organization, as each sub presenter can have its own separate unit test class. The sub presenter pattern also keeps scaling at the forefront of one’s mind during planning: if there is a clear division of logic, e.g. header logic vs footer logic, you can plan for it from the beginning instead of waiting until the presenter is over a thousand lines long.

Performance for free

With auto-mvi using reflection to execute functions, an opportunity presented itself. The reflection call is straightforward:

myFunctionReference.invoke()

The function, like all the functions in our previous MVP presenters, executes on the main thread. However, by moving the execution of this one line to a background thread, we shifted a large portion of the total code that executes in the Yelp apps off the main thread, leading to increased performance overall. This change only affects the presenters; the view code still runs on the main thread, as it must.

The code executes on a single background thread to ensure that each unit of work is carried out sequentially. This means that all the presenter code, performant or not, now runs on a background thread in the model.
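
As a rough sketch of the idea, a single-threaded executor can stand in for auto-mvi’s actual scheduling mechanism, and myFunctionReference for the annotated presenter function resolved via reflection:

import java.util.concurrent.Executors
import kotlin.reflect.KFunction

class BackgroundDispatcherSketch {

    // One background thread keeps presenter work sequential but off the main thread.
    private val presenterThread = Executors.newSingleThreadExecutor()

    fun invokeOnBackground(presenter: Any, myFunctionReference: KFunction<*>) {
        presenterThread.execute { myFunctionReference.call(presenter) }
    }
}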

Testing

Writing unit tests for MVP presenters and views is easy, and it’s one of the greatest advantages MVP has over other architectures. We used Mockito to verify that functions were called on the interfaces that made up the MVP contract, which is a seamless and straightforward way to test. For example:

fun whenButtonClicked_loadingProgressShown() {
    presenter.buttonClicked() // Simulated UI interaction
    Mockito.verify(view).showLoadingProgress()
}

In MVI, we wanted to make sure that the code was still easily testable. The approach we decided on was to record the events and states that are emitted over the event bus and make assertions on them.

To simplify testing, we created a JUnit test rule called PresenterRule. In addition to abstracting away most of the setup required for the presenter and event bus, the presenter rule also acts as an event bus recorder and provides a set of functions for asserting what happened.

Taking the example above, this looks like:

fun whenButtonClicked_loadingProgressShown() {
    presenterRule.sendEvent(ButtonClicked)
    presenterRule.assertEquals { listOf(ShowLoadingProgress) }
}

Along with verifying behavior, this approach provides a high-level view of which events and states were triggered and in what order. Developers can also assert that certain states were not triggered.
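
For a rough idea of what such a rule involves, here is a minimal sketch of a recording rule. The event bus and all the names below are hypothetical stand-ins; Yelp’s real PresenterRule wires up the presenter and EventBusRx for you and offers a richer assertion API:

import org.junit.Assert.assertEquals
import org.junit.rules.ExternalResource

// Hypothetical minimal event bus, used only to keep this sketch self-contained.
class FakeEventBus<EVENT : Any, STATE : Any> {
    var onEvent: (EVENT) -> Unit = {}
    var onState: (STATE) -> Unit = {}
    fun publishEvent(event: EVENT) = onEvent(event)
    fun publishState(state: STATE) = onState(state)
}

class PresenterRuleSketch<EVENT : Any, STATE : Any>(
    val eventBus: FakeEventBus<EVENT, STATE> = FakeEventBus()
) : ExternalResource() {

    private val recordedStates = mutableListOf<STATE>()

    override fun before() {
        // Record every state the presenter emits so tests can assert on them afterwards.
        eventBus.onState = { recordedStates += it }
    }

    // Simulate a UI interaction by publishing a view event for the presenter under test.
    fun sendEvent(event: EVENT) = eventBus.publishEvent(event)

    // Assert on the full, ordered list of emitted states.
    fun assertStates(expected: () -> List<STATE>) = assertEquals(expected(), recordedStates)
}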

Reflecting 4 years later

Does it actually help scalability?

Many teams have adopted the sub presenter pattern with great results. In 2020, the Biz Mobile Foundation team rewrote the home screen of Yelp’s Business Owner App using auto-mvi and leaned heavily on sub presenters. As a result, this complicated page’s main presenter stayed small and manageable at fewer than 200 lines, with 8 sub presenters. There are also separate unit test classes for the sub presenters, which are far more manageable than if all the tests were in one file.

Does it actually help performance?

From a high level, we can use Android Vitals to gauge our apps’ performance. However, auto-mvi is just one tool in Yelp’s performance arsenal. In combination with the Core Android team’s other performance efforts, Yelp’s consumer app’s frozen frame and rendering statistics on Google Play’s Android Vitals dashboard are significantly better than our competitors’.

Looking at a more specific use case: in 2020, Yelp’s Growth team migrated the onboarding pages to auto-mvi, compared the frame rendering timings of the old flow against the new MVI one, and found a greater than 50% improvement in the MVI version. This is precisely the kind of improvement we should expect, since the presenter code no longer clogs up the main thread. The numbers below outline the gains we saw on this page with auto-mvi vs MVP.

Avg Frame Render Time Improvement (Relative): -51%
P90 Frame Render Time Improvement (Relative): -67%
Frozen Frame % Improvement (Absolute): -3.99%

The performance boost resulted in an improvement in product metrics too, with a 6.32% relative lift for the Onboarding Flow Completion rate and an 8.26% relative lift for Signup Rate Completion.

Without any special or dedicated performance effort on this page, its performance improved. You might even say the performance was free.

Is unit testing still easy?

Most, if not all, of Yelp’s MVI presenters are accompanied by unit tests, and the provided testing rule has proven to speed up developer workflows. To date, we have thousands of unit tests making sure Yelp’s apps are doing what they’re supposed to do.

Conclusion

In summary, every architecture has its advantages and disadvantages, but the most important thing is to choose the one that’s most suitable for your business needs. Auto-mvi has allowed Yelp to build everything from simple screens to the most complex ones in a scalable and testable way, while keeping runtime performance a feature rather than an afterthought.

Acknowledgments

Thanks to Diego Waxemberg, Jason Liu, and all the feature teams at Yelp who provided invaluable feedback on our early prototypes and more importantly, adopted auto-mvi on their screens. On Core Android, shoutout to Kurt Bonatz, Matthew Page, Ying Chen for their contributions and help maintaining auto-mvi over the years. Many thanks to all the past members of Yelp who contributed ideas and feedback too.
