By Jeff Bacidore

POV Algorithms: Taken to the Limit

In a previous post, we discussed how a VWAP algorithm can morph into a Percent of Volume (POV) algorithm when the volume limit is hit. In this post, we extend that idea: constraints meant to prevent unwanted behavior can wind up doing odd, and often very costly, things when those constraints are hit. Specifically, some POV algorithms and related participation rate-based algorithms (e.g., some implementation shortfall algorithms) use upper and lower bound rates, where the realized participation rate is required to stay within a specified band, e.g., between 19% and 21%. Intuitively, the benefit of using a range instead of a single fixed rate is that it prevents the algorithm from needlessly tracking the target rate too rigidly, which would result in excessive use of aggressive marketable orders.
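
To make the band concrete, here is one generic way to write it down (a standard formalization, not any particular vendor's definition). Let Q_t denote the shares the algorithm has executed through time t and V_t the total market volume over the same interval. The realized participation rate, and the constraint the algorithm tries to maintain, are then:

```latex
r_t = \frac{Q_t}{V_t}, \qquad L \le r_t \le U
```

In the example above, L = 19% and U = 21%, with the target rate sitting in the middle of the band.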


For example, if a trader sets the target participation rate at a strict POV of 15% with no flexibility, the algorithm will begin sending marketable orders to get back to that 15% rate the moment it falls behind. Had the algorithm instead allowed a range of values, as some customized flavors of POV do (e.g., 10% to 20%), it would not immediately start sending liquidity-demanding orders. Instead, it would do so only after falling behind by a larger amount, in this case when its realized rate dropped to 10%. Of course, once it hits 10%, it begins sending marketable orders to remain within the 10% to 20% range. But presumably this is better than doing so at the 15% target rate, since there is some chance that future passive fills will push the realized rate back up near the 15% target.
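
To illustrate the band logic, here is a minimal Python sketch. The function name and order-type labels are ours for illustration; an actual POV engine would also handle child-order sizing, limit prices, venue routing, and so on.

```python
# Illustrative band-based POV decision rule (a sketch, not any broker's
# actual implementation).

def choose_action(algo_filled: float, market_volume: float,
                  lower: float = 0.10, upper: float = 0.20) -> str:
    """Decide what to do next based on the realized participation rate.

    algo_filled   -- shares the algorithm has executed so far
    market_volume -- total market volume over the same interval
    lower, upper  -- the participation-rate band, e.g., 10% to 20%
    """
    if market_volume <= 0:
        return "wait"                # no volume yet, nothing to track
    rate = algo_filled / market_volume
    if rate < lower:
        return "send marketable"     # fell below the band: cross the spread
    elif rate > upper:
        return "pause"               # above the band: stop posting for now
    return "post passively"          # inside the band: rest limit orders

# Example: 900 shares filled against 10,000 shares of market volume is a
# 9% realized rate, below the 10% lower bound:
print(choose_action(900, 10_000))    # -> "send marketable"
```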


But this raises the question: what determines the rate at which the algorithm can trade "passively," without having to send marketable orders? For a given limit order strategy, this rate is determined by the incoming marketable flow. For example, if the algorithm put every order on Exchange XYZ, its ability to participate hinges solely on how much volume is done on Exchange XYZ and how likely the marketable flow is to interact with that order. If relatively little flow goes to Exchange XYZ, and/or the limit order has relatively low priority (perhaps because its price is too passive or the queue was long when the order arrived), the algorithm will fall behind its target rate and wind up sending marketable orders to stay on schedule.
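
One way to formalize this, under the (deliberately extreme) XYZ-only assumption: the passive rate the strategy can sustain is capped by Exchange XYZ's share of total market volume times the fraction of XYZ's volume that actually executes against the resting order:

```latex
r_{\text{passive}} \le s_{XYZ} \times c
```

Here s_XYZ is XYZ's share of consolidated volume and c is the capture fraction, which depends on the order's price and queue priority. If either term is small, the achievable passive rate is small, and the difference has to be made up with marketable orders.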


Of course, this Exchange XYZ-only strategy is a stupid and unrealistic one. Brokers certainly optimize their algorithms and routers to maximize passive trading. But even the best strategy ever designed has some maximum passive rate, and that maximum is likely to be relatively low, given that the algorithm often cannot place orders everywhere without risking getting too far ahead of the desired rate (or overfilling). And even then, it can't simply cut to the front of every queue. Therefore, as the target rate rises above some level, even the best broker algorithm is going to fall behind schedule and use aggressive trading to track the target rate. Put differently, setting a high participation rate while expecting a broker algorithm to execute entirely passively is no different from how I used to walk home from the subway each night complaining to my mom on my flip phone about how I had yet to be "discovered," and that it was only because all the NYC modeling agencies didn't know what true talent was. But I digress.


But what about giving the algorithm some slack before it gets aggressive? Surely that would solve the problem, right? Unfortunately, not always. In fact, not only does slack fail to eliminate the risk, it may create even bigger problems. For example, suppose you set a target rate of 25% but allow the algorithm to vary between 20% and 30%. All else equal, this is less likely to generate marketable orders than following the 25% rate strictly. But it doesn't come without costs. You may be less likely to hit the constraint, but if the lower bound is itself above the rate you can sustain passively, you will still hit it and start sending marketable orders. You will just do so later in the order, and you may end up with a lower overall fill rate if you never fully catch back up to the desired 25% rate. And it can be even worse, depending on how the algorithm is implemented.


Specifically, some implementations use the 20% lower bound as a trigger to "catch up" to the target rate. When the algorithm falls below 20%, it sends marketable orders not merely to stay above 20%, but to get all the way back up to the 25% target. That may sound reasonable, but there are two problems with it. First, the catch-up is typically done immediately, which can be impactful: the amount of trading needed to get back to 25% is larger than the amount needed to simply get back above 20%. Just as importantly, the aggregate size of those catch-up orders can be substantial if the order has been trading below the target rate for a long period of time and/or the order is relatively large.
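
Here is a sketch of that catch-up variant, continuing the illustrative Python from before (again, our own stylized version, not any specific broker's logic). Note that it deliberately ignores the fact that the catch-up trades themselves add to market volume; footnote [1] and the calculation below address that.

```python
# Illustrative "catch up to target" rule: on breaching the lower bound,
# trade back to the full target rate, not merely back above the bound.

def catch_up_shares(algo_filled: float, market_volume: float,
                    target: float = 0.25, lower: float = 0.20) -> float:
    """Marketable quantity a catch-up implementation might send the moment
    the realized rate breaches the lower bound (approximation: ignores the
    volume added by our own catch-up trades)."""
    rate = algo_filled / market_volume
    if rate >= lower:
        return 0.0                    # still inside the band: nothing to do
    return target * market_volume - algo_filled

# Ten minutes of trading, 100,000 shares of market volume, realized rate
# just under the 20% bound: the algorithm fires off ~5,000 shares at once.
print(catch_up_shares(19_999, 100_000))   # -> 5001.0
```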


To see this, let's continue the example. Suppose the order trades above the 20% lower bound for the first 10 minutes but then breaches it. The algorithm needs to trade an additional 5% of the past volume to get back up to the 25% target rate. Had market volume been 200 shares over those 10 minutes, the algorithm would only need to send around 10 shares to get close to the 25% target.[1] But suppose market volume over that period had been 100,000 shares, or even more. Do you really want the algorithm to trade 5,000+ shares immediately to get back up to the target rate? This problem becomes increasingly severe the larger the slack is and/or the longer the order trades above the lower bound. So while a lot of slack decreases the probability that the lower bound is hit, the algorithm really goes to town when it finally is hit. Not really what we wanted, was it?
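
For completeness, the exact catch-up quantity is a bit larger than the simple 5%-of-volume figure because, per footnote [1], your own catch-up trades add to market volume. With F the algorithm's fills so far, V the market volume so far, and τ the target rate, the marketable quantity x must satisfy:

```latex
\frac{F + x}{V + x} = \tau \quad\Longrightarrow\quad x = \frac{\tau V - F}{1 - \tau}
```

In the small example, V = 200 and F = 0.20 × 200 = 40, so x = (50 - 40)/0.75 ≈ 13 shares rather than 10; with V = 100,000, the same formula gives roughly 6,700 shares rather than 5,000.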


The bottom line is that there is no way to avoid the inevitable. When you try to trade a large percent of volume using a POV algorithm, specifically at a rate above the "natural" rate at which you could reasonably expect to trade primarily passively, you should expect to pay in the form of the larger impact caused by marketable orders. More generally, as this post and our previous VWAP post have shown, constraints meant to avoid bad outcomes can sometimes cause even worse ones. The key is understanding how the algorithm will behave when these constraints are hit, and doing so before you learn the hard way. Perhaps had I asked more questions before I moved to NYC, I would have realized that I would never grace the cover of Men's Vogue. (And not just because that magazine didn't even exist.)

[1] Technically, you would need to trade somewhat more than this, because your own catch-up trades add to market volume, which in turn raises the number of shares needed to reach the target rate.

The author is the Founder and President of The Bacidore Group, LLC and author of the new book Algorithmic Trading: A Practitioner's Guide. For more information on how the Bacidore Group can help improve trading performance as well as measure that performance, please feel free to contact us at info@bacidore.com or via our webpage www.bacidore.com.



