Russia Growth Diagnostics (7): Market Structure and Competition

< Part 6: Taxation and Laws

Thirty years ago the entire Russian economy was managed by the state. Some estimates of the current state presence reach 50% of GDP, and this makes the current market structure a likely source of inefficiencies. Let’s see if it is.

Industrial Organization

Over the last 15 years, Russia transformed several state-owned industries according to a simple rule: take an industry and separate it into a natural monopoly and firms that will compete with each other. After the separation, the state retains control over the natural monopoly and privatizes the rest. This was done for electricity, railroads, utilities, and oil. Though not so competitive in the details, the newly created firms are separate business entities and have to think about profit.

The state retained control over certain industries. The Russian government is a monopolist in banking and gas production, although this doesn’t imply the textbook monopoly behavior, with prices above and output below the competitive equilibrium. I already discussed banking in the post on finance and will just restate here that the industry is open to new competitors, including foreign banks.

The third interesting part is the industries where the state consolidated assets, instead of privatizing them. The newly created “state corporations” manage state assets in the defense industry, nuclear energy, and hi-tech. They don’t have many competitors in these markets.

Regional economies may lack local competition. This includes company towns with a single major employer, former state enterprises that have been privatized by geographical clusters, and other peculiarities of an economy in transition.

Government procurement is the last suspect for distorted competition. Technically, it should be the most efficient market, where every pen is bought at an auction. In practice, the system involves collusion and incentives to spend more. Still, it’s difficult to separate trivial corruption from attempts to overcome formalism. Though price isn’t the only criterion in procurement auctions, ex ante requirements for bidders are incomplete and leave space for price dumping by incompetent firms. Is this procurement system a constraint in general? Yes, in the sense that private firms would be better at picking suppliers.

I haven’t discussed privately owned sectors, like retail and cellular, but they would benefit from the same things the state sector would. Namely, an independent and powerful agency that protects competition on an ongoing basis, not just at the moment when the state privatizes assets. Secondly, government price setting, including price ceilings for natural monopolies, is based on arbitrary factors rather than economic principles. Of course, the government uses some ad hoc models, but as long as these models remain secret, government pricing casts doubts.

Industrial Policy

By “industrial policy” I mean incentives for reallocating resources to productive industries. This smells of anti-market sentiment, but it isn’t. Many markets have already been distorted (not just by government), and these distortions are costly. Hsieh and Klenow (2011) estimate a roughly 50% productivity gain for China and India if these countries moved resources to more productive firms.

An overview of Russian productivity growth from Kaitila (2015):

screenshot

Total factor productivity (TFP) depends on economy-wide factors and on the allocation of inputs across firms. The economy-wide factors are all those things I discussed in the previous posts. Here I take on the distribution of inputs, mostly physical capital and labor.

McKinsey (2009) reviews productivity in Russia from a business perspective. Academic sources are listed in the table above, and I also recommend Bessonova (2007, in Russian). Most sources reach a similar conclusion: high dispersion of productivity within industries, which means that unproductive firms survive. In a hypothetical market economy, unproductive firms don’t live long before they lose capital and employees. Something keeps them afloat in Russia, and it retards TFP growth.

Variety, Complexity, and Innovations

This section is due to HRV’s discussion of product diversification in economic development. Hausmann and Hidalgo (2011) and Klinger and Lederman (2006) use trade data for tests. Trade data underestimates the diversification of the Russian economy because Russia has a big domestic market and its exports mostly consist of commodities. Anyway, here are the numbers:

Source
Source

Primary industries dominate exports, but the implications are unclear. First, there’s currency appreciation and other distortions caused by high commodity prices. Second, the productivity of the entire economy may be insufficient to compete at current exchange rates. It means that developing new industries wouldn’t necessarily diversify exports.

Does the economy need some domestic diversification? If the economy desperately needed some intermediate inputs, it would exhaust the trade surplus. But the trade balance is positive, and imports consist mostly of consumer goods, not innovative capital goods.

Overall, creating new productive industries is a good idea, but it’s called venture investing. And few investors are good at this trade.

Summary

Market structure and industrial policy, as they are, restrain growth. But fixing them is a free lunch: an opportunity to get more output from the same inputs. Meanwhile, the inputs, capital and labor, won’t come to Russia in the near future. So this lunch is more desirable than policies aimed at investment and labor participation.

Twitter, Brevity, Innovation

Singapore’s Minister for Education [sic] recollects his lessons from Lee Kuan Yew:

I learned [from Lee] this [economy of effort] the hard way. Once, in response to a question, I wrote him three paragraphs. I thought I was comprehensive. Instead, he said, “I only need a one sentence answer, why did you give me three paragraphs?” I reflected long and hard on this, and realised that that was how he cut through clutter. When he was the Prime Minister, it was critical to distinguish between the strategic and the peripheral issues.

And that’s what Twitter does. It teaches brevity to millions. Academics and other professionals who face tons of information daily must love it. First, because it saves their time. Second, it prioritizes small pieces of important information.

Emails and traditional media do this badly because people can’t resist the temptation to get into “important details.” But my details are important only after you ask for them. And Twitter restrains me from writing them in advance by leaving me only 140 characters (right now, I’m over 100 words already). So it saves two people’s time. As Winston Churchill, himself a graphomaniac, said, “The short words are the best.”

Short messages earn most interactions (Source)

Like many other good ideas, this wasn’t what the founders initially had in mind. They had to cut all messages to 140 characters to make them compatible with SMS and, thus, mobile. Later on, web services such as Imgur borrowed this cutoff, this time not as a technical restriction but to improve user experience. That’s the easy part.

The second part is difficult. Twitter is bad at prioritizing information. Tags and authors remain the major elements of structure. Search delivers an unpleasant experience (maybe this is what made Twitter cooperate with Google). If you missed something in the feed, it’s gone forever.

This weak structure is partly due to initial engineering decisions. However, structuring information without user cooperation is difficult everywhere. And users won’t comply, as tweets should be effortless by design. It means engineers have to do more of the hard work. In turn, that costs money and time, so there must be strong incentives to do it. The incentive is not there because Twitter lacks competition.

Would anyone step in and fix this? Suppose you take the cheap way and ask users to be more collaborative. You can make a Twitter for academics with all the important categories, links, and whatever helps researchers communicate more efficiently. This alternative will likely fail, if it hasn’t already, to gain a critical mass of users. Even in disciplined organizations, corporate social networks die of low activity. Individually, employees stay with what others use. The others use what everyone uses, and everyone uses what they used before. You need something like a big push to jump from the old technology.

A big push away from Twitter is more like science fiction now. Whatever deficiencies it has, a loss-making company priced at $30 billion wins over better-designed newcomers. In the end, its 280 million users are centrally planned by Twitter’s CEO. That’s about the population of the Soviet Union in 1991.

It’s not new that big companies lock users into their ecosystems. The difference is, sometimes it’s justified, other times it’s not. For Twitter, it’s difficult to imagine any other architecture because major social media services all impose a closed architecture, with third-party developers joining on slavery-like terms. To take the richest segment, most iOS developers don’t break even. So, apart from the technical restrictions of the Twitter API, the company doesn’t offer attractive revenue-sharing options to developers who could contribute to its capacities and, thus, market capitalization, for example, by addressing the structural limitations mentioned before.

All in all, interesting experiments in making communications more efficient end very quickly as startups reach traction. After that moment, they become conservative, careful, and closed. And this is a step backward.

How Google Works: Unauthorized Edition

Over the years Google has earned a reputation as a unique workplace endlessly generating great innovations. This image of an engineering wonderland misses many important aspects of the company’s inner workings. You could expect Google’s management to be a bit more critical about this. But as Eric Schmidt’s new book How Google Works shows, it’s not the case. The book reestablishes all the major stereotypes while paying little attention to the things that made up 91% of Google’s success.

Revenue: Auctions

The 91% is the share of revenue Google generates from advertising sold at the famous auctions that occur each time someone opens a webpage. While an auction is an efficient way of allocating limited resources such as ad space, these ad auctions squeeze advertisers’ pockets in favor of the seller, that is, Google and its affiliates.

In economic terms, auctions eliminate consumer surplus:

Wikipedia

That’s a “normal” market, where advertisers pay the equilibrium price. Instead, Google takes the entire surplus by selling ads in individual units—each for the maximum price an advertiser would pay. The blue supply curve is nearly flat in this case, and prices go along the red demand curve. Technically, advertisers pay the second-highest price—the mechanism Google chose for stability (see generalized second-price auction and Vickrey auction)—but under intense competition the difference between the first and second prices is small.
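As a toy illustration (not Google’s production algorithm), a single-slot second-price auction fits in a few lines: the highest bidder wins but pays the runner-up’s bid, so under intense competition the winner’s payment is close to its own bid. The bidder names and amounts are made up.

```python
# Minimal sketch of a single-slot second-price (Vickrey) auction.
# Bids are hypothetical; Google's real ad auction adds quality scores etc.

def second_price_auction(bids):
    """bids: dict of bidder -> bid. Returns (winner, price paid)."""
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner = ranked[0][0]
    # Winner pays the second-highest bid (or its own bid if alone).
    price = ranked[1][1] if len(ranked) > 1 else ranked[0][1]
    return winner, price

bids = {"seller_a": 3.10, "seller_b": 2.72, "seller_c": 1.50}
winner, price = second_price_auction(bids)
# seller_a wins but pays seller_b's bid of 2.72; with many close bids,
# the gap between first and second prices shrinks toward zero.
```

Paying the second price rather than the first is what makes truthful bidding a sensible strategy, which keeps the auction stable.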

How does it work in practice? Suppose you are looking for a bicycle and just google it. When your AdBlock is off, you see something like this:

screenshot

Now, you click on “made-in-china.com,” buy whatever it sells, and have your bicycle delivered. Made-in-China.com pays about $2.72 to Google for your coming through this link (you can find prices for any search query in the Keyword Planner). This price is determined at the auction, where many bicycle sellers automatically submit their bids and the ad texts attached to them.

The precise auction algorithm is more complex than just taking the highest bid, because the highest bid may come with an ad that you won’t click on, and the opportunity will be wasted. Also, since conversion rates are way below 100%, Made-in-China.com has to pay these $2.72 several times before a real buyer comes by. This increases the price of the bicycles the website sells. Some insurance-related ads cost north of $50 each—all paid by insurance buyers in the end.
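A back-of-the-envelope calculation shows how click prices end up in retail prices. The 2% conversion rate below is a made-up assumption, not a figure from the source:

```python
# How many click payments does one sale absorb?
cpc = 2.72          # cost per click, from the Keyword Planner example above
conversion = 0.02   # hypothetical: 1 buyer per 50 clicks
cost_per_sale = cpc / conversion
print(round(cost_per_sale, 2))  # 136.0 -- baked into the bicycle's price
```

At a 2% conversion rate, the seller effectively spends $136 on advertising per bicycle sold, and the buyer pays for it.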

Though this mechanism would make no sense without the users attracted by Google’s great search engine, it takes the most out of customers—and transfers it to Google.

Retention: Monopoly

How does Google Search attract users? Well, first, by showing them relevant results. It sounds more trivial now than it did ten years ago. Users now expect Amazon.com to be the first link for almost any consumer good and Wikipedia for topics of general interest. These websites are considered the most relevant not because they’re the best in some objective sense, but again because of the particular technologies that made Google so successful.

Larry Page and Sergey Brin’s key contribution to their startup was the PageRank algorithm. PageRank is patented, but the underlying ideas are easy to find in graph theory. The more links point to your website, the higher your website ranks in search results. When I google “PageRank,” Wikipedia’s article is at the top. When I link to this article here, it becomes more likely that Wikipedia’s article will remain at the top. As a side effect, linking to the first page of Google results creates a serious competitive advantage for top websites. For Wikipedia, this may be a plus, as more people concentrate on improving its pages. But strong positions in search results also secure Amazon.com’s monopoly in e-commerce.
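The self-reinforcing effect above can be sketched with a minimal PageRank via power iteration on a toy link graph. The graph and damping factor are illustrative, not Google’s production setup:

```python
# Minimal PageRank sketch: iterate rank flow over a toy link graph.

def pagerank(links, damping=0.85, iters=50):
    """links: dict page -> list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iters):
        new = {p: (1 - damping) / n for p in pages}
        for p, outs in links.items():
            if not outs:  # dangling page: spread its rank evenly
                for q in pages:
                    new[q] += damping * rank[p] / n
            else:
                for q in outs:
                    new[q] += damping * rank[p] / len(outs)
        rank = new
    return rank

# Two blogs link to Wikipedia; Wikipedia links back to only one of them.
graph = {"wikipedia": ["blog_a"], "blog_a": ["wikipedia"], "blog_b": ["wikipedia"]}
ranks = pagerank(graph)
# "wikipedia" ends up with the highest rank: inbound links from the
# rest of the graph keep it on top, which attracts yet more links.
```

The feedback loop is visible even in this tiny graph: whoever collects the inbound links keeps the top position.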

Google’s search technologies are supported by intensive marketing efforts to eliminate competitors. Google paid Mozilla to keep Google as the default search engine until Yahoo! outbid it in 2015. Four years ago, Eric Schmidt testified at a Senate hearing about unfair competition practices, with Google’s search results allegedly biased in favor of Google services. The European Commission is investigating Google’s practices in Europe. In mobile markets, Google requires hardware manufacturers to install Google Mobile Services on all Android devices—so users follow their status quo bias and stay with Google everywhere.

There are more fascinating examples of Google protecting its market share. They’re missing from Eric Schmidt’s book, which gives all the credit to Google’s engineers and none to its lawyers and marketing people.

Development: Privileges

When a typical business creates something, managers carefully watch costs. They negotiate with suppliers, look after quality, build complex supply networks, balance payments, and insure the company against price shocks. Google is the fifth-largest company in the world, but it’s mostly free of these headaches. Unlike Walmart, ExxonMobil, or Berkshire Hathaway, Google employees make things out of thin air and outsource routines, like training the search engine, to third parties.

This ensures that even entry- and mid-level employees are extremely skillful. Not surprisingly, most Google legends concern its HR policies. These legends split into two categories: those that make sense and those that don’t.

The culture stuff is what makes no sense. It’s easy to see in non-policies like granting 20% of time to personal projects. This rule might mean something for car-assembly jobs, but here it’s software development. An engineer’s personal projects may take 50% of the time if he’s done his daily job—or zero otherwise. It depends on his ability to deliver the results expected for his salary. More importantly, his personal projects belong to Google, even if he delivers his daily projects on time but once edited his personal code on campus.

The book also mentions the 70/20/10 rule: “70 percent of resources dedicated to the core business, 20 percent on emerging, and 10 percent on new.” Even if the authors could prove that the rule is optimal, most other companies are so limited in resources that they have to put 100 percent into the core business.

Nor do the real things make Google’s culture different. Each employee must have a decent workplace, attention, and internal openness, but these things are not sufficient for a great company. We are not in Ancient Greece. Other companies also treat employees well: not much slavery around, the meals are fine. Google just tends to be at the extreme.

Laszlo Bock, SVP of People Operations, tried to dissuade the public from thinking that good HR policies require Google’s profit margins. In his opinion, you can get much out of people with openness and good treatment alone. His examples include telling employees about sales figures. It’s a rather detached example. First, sales numbers aren’t always as optimistic as in Google’s history. Ups and downs, you know. You have to learn how to communicate the downs to employees and keep them optimistic.

Soberness appears in less fortunate startups. Evan Williams of Blogger had the moment when the money ran out and employees didn’t appreciate it: “Everybody left, and the next day, I was the only one who came in the office” (from Jessica Livingston’s Founders at Work, a good, balanced account of the early days at startups). It’s just one example showing that relationships with employees are not as trivial as Bock presents them.

If not culture, then what makes the difference? Quite trivially, privileged access to job candidates. First, money is not an issue, because Google easily outbids everyone else. Its entry-level wages surpass those of Wall Street firms, including major hedge funds like Bridgewater Associates and Renaissance Technologies. Second, Google has the right of first interview. That comes with an exceptional reputation, low-stress jobs, secure employment, ambitious goals, and the resources to implement big ideas.

So What?

How Google Works understates the actual achievements of the company. The book is all about famous corporate rules, making the business look simplistic. It’s not. The $360 bn business consists of hundreds of important details in each key operation, like hiring, marketing, and sales.

Keeping these things together is an achievement of Eric Schmidt, Laszlo Bock, and other executives. However, Schmidt’s book should not mislead other entrepreneurs into thinking that the 20% rule creates great products or that reporting sales numbers to employees increases sales better than ad auctions do. Google is a good role model for learning the hardcore IT business, but readers will have to wait for some other book to learn from this company.

Amazon Monopoly Is Better than Free Market

Source

Amazon’s monopoly in online retail got some attention recently. For years, Amazon has restrained other online retailers with predatory pricing and covered its own losses with debt. The debt is cheap because the company enjoys a low-risk monopoly and can fix its profit margin anytime. And there’s no need to do so as long as lenders see this huge advantage.

That’s the bad thing about Amazon. The good thing is that the company invests in new technologies and better service. Customers don’t have to stay 20 minutes on the line or worry about out-of-stock goods. The scale and scope of Amazon’s operations made this possible.

Online retail in Russia shows what a world without this kind of monopolist looks like. Thousands of firms sell about the same stuff. Independent price search engines (of the kind Google tried to be) ensure competitive prices; hence, profit margins remain low. Low margins kill any chance for these small retailers to invest in service and inventories. Sellers economize on everything. In the end, customers buy stuff across dozens of shops with equally poor service and red tape, like entering all their personal information again and again.

An Amazon-type retailer isn’t emerging in Russia because the entry cost is already high and VC firms dislike investing in highly competitive markets. In turn, online retailers have no sources of capital to reach Amazon’s economies of scale. “What about Walmart?” you may ask. It emerged out of a competitive industry and became the largest company in the country.

It seems that Walmart appeared at the right moment. When Sam Walton opened his first shop, the United States was investing heavily in new highways and railroads. Imports and interstate trade intensified. You need efficient logistics to keep prices down in this game. Walmart had it and became a monopolist in its segment. It entered international markets too late and failed there, as in Germany.

Such technological revolutions rarely happen. But when they do—as with Walmart, Amazon, or Russian retailers—they shape the market for years. The sequence is important. You do want an Amazon-type monopolist to explore efficient practices in the beginning. Thousands of small firms don’t innovate like this, so low quality reigns in the free market for years, which is a real loss. But once you have a monopolist that sets high standards, governments need to figure out how to regulate it to revive competition.

How Competition Destroys Confidence

I’ve been wondering why Gallup excludes science and academia from the list of institutions Americans can be confident in. The firm has run that poll since 1973 and has had 40 years to ask. Maybe it didn’t because academia itself is far from influential. Anyway, there are other polls that do mention scientists:

Source
PEW
Source
PEW

A bigger puzzle is, why do these exact institutions earn so much confidence?

Source

After all, the military has lots of secrets, the clergy is not accountable, and the medical system charges a four-digit bill for dealing with a cold. What’s on the other end? The President and Congress, who are elected by the people. The media—they’re open, you can check them.

Any explanations? Well, the military, the police, the church, and the Supreme Court are monopolies. In contrast, the President, Congress, and the media compete furiously (often with each other). In 80 years, you can live through ten presidents and visit the same church every Sunday. You have ten local newspapers, but one police department.

Openness creates opportunities for information disclosure. Illustratively, the competition between Obama and the Republicans reveals lots of details. These blame games, impossible in hierarchical and secretive monopolies, surface facts biased against the other side. And negative information is more powerful than good stories. As Warren Buffett said, “It takes 20 years to build a reputation and five minutes to ruin it.”

This competition creates confusion and distrust, but it also gives a chance to compare different opinions and understand who does better. Public institutions holding a monopoly grant no such chance. So they seem to be fine. If they say the Earth is square, then it’s square—who would question this obvious fact?

Answering questions already answered on Stack Exchange

In one of the previous posts, the relative distribution of upvotes on Stack Exchange showed that adding one more answer to a question isn’t in much demand from readers, because readers give at least half of the upvotes to the first-placed answer alone:

(y-axis: the answer’s mean fraction of upvotes in a question; x-axis is the position of the answer)

But the absolute number of upvotes tells why answers still appear:

(y-axis: mean of upvotes; other notation is the same)

Questions with few answers happen to be unpopular in general (that’s why they have fewer answers in the first place). You can barely notice the upvotes for questions with a single answer, but they grow with the number of answers, as the graph shows. The last frame describes the case of 16 answers, and it’s better to be careful beyond that because the original sample has too few questions with many answers.

The bottom line is that answering a question that already has many answers may be more useful to users than answering a question with no answers at all. That’s at least valid for unconditional expectations. Controlling for time is the next most useful thing to do to understand how demand and supply operate in Q&A markets and what can be done to make them more efficient.
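The unconditional comparison above amounts to a simple group-by-position average. A minimal sketch on made-up upvote counts (the sample below is hypothetical, not the original data):

```python
# Average upvotes per answer position, unconditional on question popularity.
from collections import defaultdict
from statistics import mean

# Hypothetical sample: each list is one question's upvotes by answer position.
questions = [
    [12, 5, 3],      # popular question, three answers
    [2],             # single-answer question, barely noticed
    [30, 14, 9, 4],  # very popular question
    [1, 1],
]

by_position = defaultdict(list)
for upvotes in questions:
    for pos, votes in enumerate(upvotes, start=1):
        by_position[pos].append(votes)

mean_upvotes = {pos: mean(vs) for pos, vs in by_position.items()}
# Late positions exist only on popular questions, so their absolute
# upvote counts are high even though their share per question is low.
```

This is exactly why the absolute and relative pictures disagree: late answers appear only where demand is already strong.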

Athletes vs. data scientists

Competitions among athletes have quite a long history. Armchair sports don’t. Chess, which comes to mind first, became an important sport only in the 20th century.

An even younger example is data-related competitions. Kaggle, CrowdANALYTIX, and HackerRank are major platforms in this case.

But do data scientists compete as furiously as athletes? Well, in some cases, yes. Here’s one example:

(see appendix for how the datasets were constructed)
The Merck and Census competitions have about the same number of participants and comparable rewards (though winners of the Census competition were restricted to US citizens). It may seem surprising that their results look so different. I’ll get back to this in the next post on data competitions.

Technically, all the competitions look alike. The lower bound is zero (minutes, seconds, errors), though only the comparison against the baseline makes sense. Over time, the baseline for sports has declined:

(Winning time for 100m. Source.)

A two-second (-18%) improvement in 112 years.

Competitions on a single dataset look like this (more is better):

(Restricted sample taken from chmullig.com)

In general, the quality of predictions increases substantially over the first few weeks. Then the marginal returns to effort decrease. That’s interesting, because participants make hundreds of submissions to beat numbers three places beyond the decimal point. That’s a lot of work for a modest monetary reward. And, well, the monetary reward makes little sense at all: a prize of $25–50K goes to winners who compete against 250 other teams. These are thousands of hours of data analysis, basically unpaid. This unpaid work doesn’t sound attractive even to sponsors (hosts), which are very careful about paying for crowdsourcing. So, yes, it’s a sport, not work.

Athletics has no overfitting, but it’s an issue in data competitions. For example, here’s a comparison between public and private rankings for one of the competitions:

Username Public rank Private rank
Jared Huling 1 283
Yevgeniy 2 7
Attila Balogh 3 231
Abhishek 4 6
Issam Laradji 5 9
Ankush Shah 6 11
Grothendieck 7 50
Thakur Raj Anand 8 247
Manuel Días 9 316
Juventino 10 27

(Source)

The public rank is computed from predictions on the public dataset. The private rank is based on a different sample, unavailable before the finals. The difference is largely attributed to overfitting noisy data (and to submitting the best-performing random results).
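This selection effect is easy to reproduce in a toy simulation: score many models of equal-ish skill on a small noisy public set, pick the public winner, and check its private rank. All the numbers are illustrative.

```python
# Toy leaderboard-overfitting simulation: the public winner is selected
# partly on noise, so its private rank is usually worse than 1st.
import random

random.seed(0)
n_models = 200
# Each model has a true skill; public and private scores add independent noise.
true_skill = [random.gauss(0, 1) for _ in range(n_models)]
public = [s + random.gauss(0, 1) for s in true_skill]
private = [s + random.gauss(0, 1) for s in true_skill]

best_public = max(range(n_models), key=lambda i: public[i])
private_rank = 1 + sum(
    private[i] > private[best_public] for i in range(n_models)
)
# private_rank is typically well below 1st place: the public winner was
# chosen partly for lucky noise that doesn't carry over.
```

The bigger the noise relative to the skill differences, the further the public winner tends to fall on the private leaderboard.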
In data competitions, your training performance is not equal to your final performance. That’s valid for sports as well: athletes break world records during training sessions and then finish far from the top in real competitions.

This has a perfectly statistical explanation, apart from psychology. In official events, the sample is smaller: mostly a single trial. Several trials are allowed only in non-simultaneous sports, like the high jump. The sample is many times larger during training, and you’re more likely to find an extreme result in a larger sample.
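The larger-sample point can be checked numerically. A minimal sketch, assuming an athlete’s result is just N(0, 1) noise around their ability (higher is better here):

```python
# Why records fall in training: the expected maximum of n draws grows with n.
# One official attempt vs. fifty training attempts, same underlying ability.
import random

random.seed(42)

def best_of(n_attempts):
    """Best result out of n attempts, modeled as N(0, 1) noise."""
    return max(random.gauss(0, 1) for _ in range(n_attempts))

reps = 10_000
official = sum(best_of(1) for _ in range(reps)) / reps    # single trial
training = sum(best_of(50) for _ in range(reps)) / reps   # many trials
# training is clearly larger on average: extreme results are far more
# likely to show up in the larger sample of attempts.
```

With 50 attempts instead of 1, the expected best result jumps by roughly two standard deviations, with no change in ability at all.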
Anyway, though these competitions look like fun and games, they’re also simple models for understanding complex processes. Measuring performance correctly has value in real life. For instance, hiring in most firms is a single-trial interview, and HR folks use simple heuristic rules for candidate assessment. When candidates are nervous, they fail their trial.

Some firms, like major IT companies, do more interviews. Not because they want to help candidates, but because they have more stakeholders whose opinions matter. Still, this policy increases the number of trials, so these companies hire smarter.

We don’t have many counterfactuals for HR failures, but we can see how inefficient single trials are compared to multiple trials in sports.

Appendix: The data for the first graph

This graph was constructed in the following way.

First, I took the data for major competitions:

  • Athletics, 100m, men. 2012 Olympic Games in London. Link.
  • Biathlon, 10km, men. 2014 Olympic Games in Sochi. Link.
  • Private leaderboard. Census competition on Kaggle. Link.
  • Private leaderboard. Merck competition on Kaggle. Link.

Naturally, the ranking criteria differ: minutes for biathlon, seconds for athletics, weighted mean absolute error for Census, and R^2 for Merck. All but Merck use descending ranking, where less is better. I converted the Merck metric to descending ranking by taking ( 1 − R^2 ). That is, I ranked players in the Merck competition by the variance left unexplained by their models.

Then, in each competition, I took the first place’s result as 100% and expressed the other results as percentages of it. After subtracting 100, I had the graph.
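These two steps can be expressed as a short sketch. The R^2 values below are made up for illustration; the real leaderboard data comes from the links above.

```python
# The appendix's normalization as code: with descending ranking (smaller is
# better), take the winner as 100% and report each result's gap in percent.

def percent_behind_winner(results):
    """results: list of metrics where smaller is better."""
    winner = min(results)
    return [100 * r / winner - 100 for r in results]

# Merck scores come as R^2 (larger is better); convert to unexplained
# variance first so that smaller is better, like the other competitions.
merck_r2 = [0.49, 0.46, 0.42]          # hypothetical leaderboard values
merck_desc = [1 - r2 for r2 in merck_r2]
gaps = percent_behind_winner(merck_desc)
# gaps[0] is 0.0 for the winner; the rest show percent behind the winner.
```

The same function applies unchanged to the biathlon and athletics times, since those metrics are already in descending ranking.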