Challenges for Online Lending Marketplaces

In 2016, online lending marketplaces (OLMs) took a hit. Lending Club disclosed falsified loan data and misconduct by its CEO. Graduate Leverage’s principal was sentenced to nine years for stealing $16mn of investors’ money. Another marketplace cut one fourth of its staff. On Deck Capital reduced its growth projections.

Poor performance by the leaders is no exception. Lending marketplaces in their current form share several common problems.

Tougher regulations

Marketplaces’ misbehavior and increasing volumes of credit attract more scrutiny from federal and state regulators:

  • Interstate operations may require honoring local state anti-usury laws that cap interest rates. Ongoing litigation will determine whether loan resellers, such as Lending Club (LC), should be excluded from the federal anti-usury exemptions.
  • The US Treasury, FDIC, and CFPB have started requesting more information from lending marketplaces and their partners.
  • The Equal Credit Opportunity Act protects borrowers from discrimination. Banks may avoid discrimination lawsuits by pooling disadvantaged groups together with other borrowers at higher interest rates. Some online marketplaces simply post loan applications, so disadvantaged borrowers are not rejected directly but may simply fail to find buyers. In recent years, OLMs have moved toward more centralized models in which loans are packaged and sold by the platform, which can make the platform responsible for unbalanced loan portfolios. The same applies to selective marketing practices: SoFi, for example, accepts borrowers only from top-tier universities.
  • The risk of being classified as an investment company or a broker-dealer, with more compliance requirements and disclosures.

These regulations increase operating costs and the net interest spread, which brings us to the other cost drivers.

Higher costs

Some OLMs present themselves as low-cost, efficient lenders (see Figures 1 and 2 in the appendix). This is doubtful:

  • Being lenders themselves, Santander and JP Morgan bought more than $2bn of Lending Club loans. It means that an operating expense of 2–3% goes on top of the “traditional lender” operating expense of 5–7% (Figure 1). Institutional investors generate two-thirds of LC’s sales.
  • Instead of removing intermediaries, as claimed, OLMs add themselves and issuing banks to the chain between borrowers and lenders. See Figure 3: the “investors” buying private instruments (certificates and loans) on the right are professional investors: banks, investment funds, and funds of funds. Each takes a commission and creates costs for its clients, so you can extend this scheme to even more intermediaries before the payments reach the end saver. Public notes sold directly to retail investors generate only one-third of sales.
  • No more NOLs. Many OLMs operate at a loss, which lets them accumulate tax deductions for the future. Although this is a general feature of startups, carryforwards contribute more to profitability in lending because of thin margins and slow growth; when the losses end, so does this tax shield.

Fragile funding

  • Institutional investors are vital to marketplaces. OLMs depend on them for credit lines, sales, and funding. Institutional investors enter slowly and quit fast, as happened with LC in May. Such exits blow holes in income statements because marketplaces can’t cut fixed costs in proportion to the decline in originated loans.
  • Banks and investment funds may also quit in response to regulatory pressure on marketplaces. Marketplaces have enjoyed loose regulation, but recent trends are killing this advantage.
  • The buy side uses leverage — another contradiction to the initial idea of the non-levered, non-fractional marketplace.
  • Retail investors don’t do due diligence. OLMs offer retail investors diversified portfolios of personal loans. One investor lends to dozens of borrowers. If he invests $10,000 in 100 prime loans, he earns $500 a year, or $5 per borrower. Due diligence becomes uneconomical at these numbers, so retail investors can’t add personal expertise to the screening process. The platform sets rates itself and sells loans in bulk.
  • The interest rate advantage is overstated. See the footnotes of Figure 2: the borrowing rate of 20.7% comes from LC’s customer survey. That is credit-card territory, and it’s impossible to interpret without LC telling the public how the survey was conducted. Second, the ROI of 0.06% is a straw man. LC is not an FDIC-insured depository institution; it sells illiquid unsecured loans. The appropriate comparison would be corporate bonds, which yield 5% for comparable risk, have a performance history, and can be sold without paying the 1% penalty set by LC’s secondary market.

Weaker demand

  • The Fed rate hike. OLMs emerged in the late 2000s, when interest rates had been cut to zero and many borrowers could refinance their loans on better terms. As the Fed raises rates, refinancing, and borrowing in general, loses its charm.
  • Fewer prime borrowers. Online marketplaces enjoyed an early influx of tech-savvy, well-off borrowers. Now the borrower base is deteriorating and the default rate is rising.
  • New data won’t help find better borrowers. Marketplace underwriters use credit scores from TransUnion, Experian, and Equifax. That’s what banks use. Some OLMs also employ unconventional data: the borrower’s university, degree, social media activity. That’s barely an advantage. (1) Alternative data is often correlated with credit scores, which makes it redundant for screening; (2) banks have access to the same new data sources, and big banks have much more than that; (3) some indicators imply discrimination: if you lend only to students of WASPy universities, expect lawsuits; (4) fraud based on falsified alternative data may be the most innocent problem of all. In general, alternative data has a neutral impact on competitive advantage in finance: everyone can have it.

No international growth

American startups get high valuations because they, unlike most others, expand internationally. Lending, however, is a highly regulated industry, with each country having peculiar rules, up to bans on foreign-owned banks. Online lenders already struggle with state-specific regulations, and serving clients outside the United States is even more problematic.


Lending marketplaces comprise 8 of the 21 financial companies in the WSJ Billion Dollar Club of young private companies. Lending Club and On Deck are public and marked to market. These companies represent the two general models of online lending: the marketplace and the single-lender credit line. They also target different borrowers (consumer vs. small business). Consumer lending marketplaces are hit hardest by all the problems above. But small business lenders also promise to help underbanked business owners, as if banks had never tried to finance SMEs before. Either way, the private equivalents of Lending Club and On Deck will see down rounds after this year’s events.

OLMs did not create a new sector with high margins. They’re ending up paying regulatory and marketing costs in a commodity market. Still, online lenders have (had?) high-tech-ish multiples. Perhaps the better comparables would be credit unions and regional banks.

Banks ended up big after years of M&As. They addressed the problems above with growth (and the luck of surviving). Online lending marketplaces don’t have a particular edge over traditional banks, so their organic growth follows the industry average. Exit through acquisition? Goldman Sachs recently opened its own personal lending platform instead of buying an existing player. It seems independent marketplaces need banks more than banks need them, which makes acquisitions unlikely too.


Figure 1: Lending Club Cost Advantage. Source: LC investor presentation.
Figure 2: Lending Club Rates Advantage. Source: LC investor presentation.
Figure 3: Lending Club Business Model. Source: LC 2015 10k.

Disclaimer: Not investment advice. For information purposes only. No affiliation with or material interest in the companies mentioned. The future tense is not a promise.

Best Time to Post? It’s Irrelevant

While social media platforms invent algorithms to show users relevant information, companies like Buffer try to understand how to circumvent these algorithms to promote their clients’ content. This is not necessarily the zero-sum game it may seem. Optimizers add more structure to content, pick relevant addressees, and distribute content to media where information overload is less extreme.

The simplest problem around is picking the best time to post when you already have the content. I looked into this once for StackExchange, and the optimal timing turned out to depend a lot on the subsite in question. StackExchange is a network of Q&A websites built on a common technology but with somewhat segregated users and different rules. The subsites look alike and integrated, and you would normally expect the common features to prevail over everything else. But according to the data, performance patterns, such as time-to-answer, vary across the subsites. The soft rules, those not engraved in the common software code, and the people make them vary.

Here’s another example: Y Combinator’s Hacker News, which has a solid community and a transparent ranking algorithm. The rules are simple: a user submits a link and a title, and the community upvotes the submission. Good submissions make the front page; bad ones go unread and forgotten. The service receives more than 300,000 submissions annually. The question is the same: given a submission, what’s the best time to post it? I took the expected number of upvotes as the criterion.

Many have studied the Hacker News dataset before; a good example is this one. There’s even a special app for picking the time (I didn’t get what it does, exactly). They answered different questions, though.

Here’s my version of events. In this post, however, I’ll make another point based on this data.

First, just looking at upvotes shows that weekends are the better days for posting (0 is Monday, 6 is Sunday):


In particular:


However, this approach can’t say much. Time affects not only the users who read links submitted to Hacker News (demand) but also those who submit the links (supply). You have causation suspects right away. Maybe users submit better links on weekends because they have more time to pick good ones. Then scheduling your submission for a weekend would not increase the upvotes it gets.

For a bunch of typical reasons (few variables available, unstructured data, and no suitable natural experiments), the impact of time on upvotes is hard to separate from other factors. You have only indirect evidence. For example, less competition on weekends may increase expected upvotes:


It remains unclear how to sum up indirect evidence into conclusions. Statistical models would disappoint: time-related variables explain less than 1% of the variation, meaning, unsurprisingly, that the other 99% depends on something else. This something includes the page you link to, the readers, and nuances of Hacker News’ architecture.
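The two calculations behind these claims (mean upvotes per weekday and the share of variation that time explains) can be sketched in a few lines. Since the dataset isn’t reproduced here, the sketch runs on synthetic data with a deliberately tiny weekend effect; the structure, not the numbers, is the point:

```python
import numpy as np
import pandas as pd

# Synthetic stand-in for the Hacker News dump: one row per submission
# with a timestamp and an upvote count. The weekend bump is injected
# deliberately small, so weekday dummies explain almost nothing.
rng = np.random.default_rng(0)
n = 10_000
created = pd.Timestamp("2015-01-01") + pd.to_timedelta(
    rng.integers(0, 365 * 24, n), unit="h"
)
weekday = pd.Series(created.dayofweek)  # 0 = Monday, 6 = Sunday
score = rng.exponential(10, n) + 0.5 * (weekday >= 5)

# Mean upvotes per weekday: the naive comparison.
by_day = score.groupby(weekday).mean()
print(by_day.round(2))

# R^2 of a regression on weekday dummies alone: how much of the
# variation in upvotes the time of week explains by itself.
dummies = pd.get_dummies(weekday, dtype=float).values
beta, *_ = np.linalg.lstsq(dummies, score.values, rcond=None)
resid = score.values - dummies @ beta
r2 = 1 - (resid ** 2).sum() / ((score - score.mean()) ** 2).sum()
print(f"R^2 from weekday alone: {r2:.4f}")
```

On the real dataset the same regression is what produces the “less than 1%” figure above.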

My point is that even a simple algorithm can be efficient, meaning its outcome is independent of irrelevant factors like time. A complex algorithm may in fact perform worse. If content promotion depends on the author’s social capital (followers, friends, subscribers), ranking relies on the author’s past submissions rather than the current one. So Facebook’s or Quora’s sorting algorithms are not only harder to game; they also may distort important outcomes.

See also: Python notebook with Hacker News data analysis

Alibaba, The State

Alibaba is sort of doing fine after the IPO. But what does it do? It replaces the state.

Roughly, when a firm picks a supplier, it wants the supplies to be fine and to arrive on time. The supplier, in turn, wants to make sure that the client pays as agreed.

Now, there are two ways to ensure this. Option A is the threat of legal action if things go terribly wrong. Option B is avoiding bad partners altogether. The state offers both. It has licensing and regulators to keep very bad companies out of the market, and it has the more traditional function of punishing businesses that violate the law.

Obviously, Alibaba is not the British East India Company: it cannot apply violence freely. But it does offer an alternative to government regulation, especially in countries where governments are not trusted. The website routinely offers inspections and secure payments. It encourages buyers to leave feedback. As punishment, it can ban businesses from the marketplace.

Alibaba reduces risk that would otherwise require more resources to manage. Though private inspections, insurance, and feedback have existed for centuries, information technology made them extremely centralized and embedded in a single company. The state also implies a monopoly, and online marketplaces have one! Amazon, eBay, and Alibaba have no strong competitors in their respective markets.

Does this replacement for weak governance affect economic development? Possibly. Alibaba is an international trade hub. Normally, small and medium enterprises are reluctant to deal with international partners due to uncertainty. For example, the World Bank points to political risks:


Investors who actually work in emerging markets estimate the risk at a third of the level reported by investors who don’t consider investing in emerging markets at all. Uninformed investors overstate risks and stay away from what can be a perfectly normal market.

Many of the B2B transactions mediated by Alibaba might not have happened at all without the relevant information. For one thing, the baseline risk is high, as governments in emerging markets are reluctant to prosecute local crooks. For another, western mass media cover these markets with a bias. Someone who read the Financial Times throughout 2014 might have the impression that China is nothing but corruption, political trials, empty infrastructure, ghost cities, and permanently slowing economic growth. Even if these materials are not necessarily biased against one country (the media look for drama everywhere, right?), readers can’t simply go out into the streets and check how things really are, as they can with domestic coverage. Therefore, businesses need a middleman who is more motivated than the state and more systematic than the media in helping shoppers in emerging markets.

Twitter, Brevity, Innovation

Singapore’s Minister for Education [sic] recollects his lessons from Lee Kuan Yew:

I learned [from Lee] this [economy of effort] the hard way. Once, in response to a question, I wrote him three paragraphs. I thought I was comprehensive. Instead, he said, “I only need a one sentence answer, why did you give me three paragraphs?” I reflected long and hard on this, and realised that that was how he cut through clutter. When he was the Prime Minister, it was critical to distinguish between the strategic and the peripheral issues.

And that’s what Twitter does. It teaches brevity to millions. Academics and other professionals who face tons of information daily must love it: first, because it saves their time; second, because it prioritizes small pieces of important information.

Emails and traditional media do this badly because people can’t resist the temptation to get into “important details.” But my details are important only after you ask for them. And Twitter restrains me from writing them in advance by leaving me only 140 characters (right now, I’m over 100 words already). So it saves two people’s time. As Winston Churchill, himself a graphomaniac, said, “The short words are the best.”

Short messages earn most interactions (Source)

Like many other good ideas, this wasn’t what the founders initially had in mind. They had to cut all messages to 140 characters to make them compatible with SMS and, thus, mobile. Later on, web services such as Imgur borrowed this cutoff, this time not as a technical restriction but to improve the user experience. That’s the easy part.

The second part is difficult. Twitter is bad at prioritizing information. Tags and authors remain the major elements of structure. Search delivers an unpleasant experience (maybe that’s what made Twitter cooperate with Google). If you missed something in the feed, it’s gone forever.

This weak structure is partly due to initial engineering decisions. However, structuring information without user cooperation is difficult everywhere. And users won’t comply, since tweets should be effortless by design. That means engineers have to do more of the hard work, which costs money and time. There must be strong incentives to do this, and the incentive is not there because Twitter lacks competition.

Would anyone step in and fix it? Suppose you take the cheap way and ask users to be more collaborative. You could make a Twitter for academics with all the important categories, links, and whatever else helps researchers communicate more efficiently. This alternative will likely fail (if it hasn’t already) to gain a critical mass of users. Even in disciplined organizations, corporate social networks die of low activity. Individually, employees stay with what others use. The others use what everyone uses, and everyone uses what they used before. You need something like a big push to jump from the old technology.

A big push away from Twitter is more like science fiction now. Whatever its deficiencies, the loss-making company priced at $30 billion wins over better-designed newcomers. In the end, its 280 million users are centrally planned by Twitter’s CEO. That’s about the population of the Soviet Union in 1991.

It’s not new that big companies lock users into their ecosystems. The difference is that sometimes it’s justified and other times it’s not. For Twitter, it’s difficult to imagine any other architecture because major social media services all impose a closed architecture, with third-party developers joining on slavery-like terms. To take the richest segment, most iOS developers don’t break even. So, apart from the technical restrictions of the Twitter API, the company doesn’t offer attractive revenue-sharing options to developers who could contribute to its capacities and, thus, its market capitalization, for example, by addressing the structural limitations mentioned before.

All in all, interesting experiments in making communication more efficient end quickly once startups gain traction. After that moment, they become conservative, careful, and closed. And this is a step backward.

It’s a Wonderful Loan: Economics of P2P Lending

The Financial Times wonders why big banks are going after P2P lending. Why do banks need companies like Aztec Money and Lending Club, with their negligible credit portfolios and messy business models? Banks might speak for themselves about their motivation (so far they haven’t), but I can think of a good economic reason why they should pay attention to P2P lending.

This reason is older than the Internet, computers, and banks themselves. It’s information about the borrower. In between conspiracies against the public, banks do a very useful thing: they take the lender’s headache about the borrower’s payback off his hands. Banks have to know their borrowers well, and typically they do, keeping the net interest spread low. Here are the rates for banks and credit unions:


Credit unions have been in the industry forever. They would fit what the FT calls “democratizing finance” and have much in common with the ideology behind P2P technologies. Credit unions show higher deposit rates and lower lending rates in the table because they know more about their borrowers. Unions lend only to trusted folks, so individual defaults decrease and rates improve. Better rates mean an even lower probability of default, so the effect is self-reinforcing.

The Grameen Bank (and Nobel laureate Yunus) played this idea brilliantly. It radically reduced market interest rates in poor countries, where high rates coupled with high default rates had been strangling the economy. The Grameen Bank entered very much like a credit union: borrowers had to provide references from local peers to get access to money. Interest rates were reduced from 50–100% annually to single digits.

Grameen-type firms and credit unions are limited in geography and expertise. You could back only your neighbor, and only in a very simple business. If he tells you he’ll buy a cow to sell milk, you’re okay. But if a guy on the other coast needs a credit line to build “radar detectors that have both huge military and civilian applications,” you want to know the risks better. That’s why, in a complex economy, Grameen is no longer relevant. Each loan application requires more information about the borrower, his credit history, and, most importantly, the purpose of the loan.

The purpose is vital for business loans. Banks learned to dig up information about the borrower and come up with an individual probability of default (you can try to predict it yourself). But they’re getting worse at knowing the client’s business. First, businesses are getting more complex. Second, banks are cutting their human workforce and local branches, and local branches provided a lot of soft information about borrowers and their performance. Jimmy Stewart’s banking was about observing his little town’s economy and deciding what would be creditworthy there. Without this source, banks pool risks and set higher interest rates, deterring borrowers.

Here comes online P2P lending. When a nuclear physicist from CERN lends money to a nuclear physicist from NASA via a P2P system, it tells you something about the borrower’s project. The guy from CERN is the right guy to judge, and he throws his own money in. That solves both the complexity problem (you can always find a lender-investor with the right expertise) and the neighborhood problem (an expert can come from anywhere). Plus, it’s technically free. The CERN physicist has already done the job banks couldn’t: he found the borrower’s project, evaluated it, and approved it. It looks like an investor’s job, and it is. P2P lending platforms like Kiva do mix investing and lending, and users do informal research before lending money.

This information allows banks to do P2P loan matching (as some VCs and foundations do), buy individually backed loans, securitize them, and so on. This is a rare example of new technologies not eating someone else’s pie (as YouTube does to mass media) but creating their own. Without this easy expert-loan matching, businesses face higher interest rates, often above their breakeven point, which means no business at all.

Still, P2P platforms themselves seem distracted from this advantage. Most reasoning behind them mentions phantom problems like “predatory interest,” excessive paperwork, and refused applications in traditional banking. These are not the problems. The financial industry is highly competitive even after the series of post-80s M&As. It evaluates risks with huge volumes of data, hires good quants, and saves a great deal on scale. In fact, the low market capitalization of major banks indicates that they have no means to “exploit” customers (Google and Amazon do, though in a delicate manner, as here and here). So the net interest margin declines:


The banks’ paperwork and rejections are just the costs of low interest rates. It makes no sense for startups to “fix” banking in this direction because it would increase the rates, pushing the industry back into prehistoric times. The information flow between lenders and borrowers is the real thing to focus on.

How Google Works: Unauthorized Edition

Over the years, Google has earned a reputation as a unique workplace endlessly generating great innovations. This image of an engineering wonderland misses many important aspects of the company’s inner workings. You might expect Google’s management to be a bit more critical about this, but as Eric Schmidt’s new book How Google Works shows, it’s not the case. The book reestablishes all the major stereotypes while paying little attention to the things that made up 91% of Google’s success.

Revenue: Auctions

The 91% is the share of revenue Google generates from advertising sold at the famous auctions that occur each time someone opens a webpage. While an auction is an efficient way of allocating limited resources such as ad space, these ad auctions squeeze advertisers’ pockets in favor of the seller, that is, Google and its affiliates.

In economic terms, auctions eliminate consumer surplus:


That’s a “normal” market, in which advertisers pay the equilibrium price. Instead, Google takes the entire surplus by selling ads in individual units, each for the maximum price an advertiser would pay. The blue supply curve is nearly flat in this case, and prices run along the red demand curve. Technically, advertisers pay the second-highest price, the mechanism Google chose for stability (see generalized second-price auction and Vickrey auction), but under intense competition the difference between the first and second prices is small.
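A minimal sketch of the second-price idea helps here. The advertiser names, bid amounts, and floor price below are invented for illustration; real ad auctions also weight bids by predicted click-through rates, which this sketch omits:

```python
# Toy generalized second-price (GSP) auction: slots go to the highest
# bidders, and each winner pays the bid of the advertiser ranked just
# below them. All names and bid amounts here are invented.

def gsp_auction(bids, n_slots, floor=0.01):
    """bids: {advertiser: bid}. Returns [(advertiser, price_paid)] per slot."""
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    results = []
    for i in range(min(n_slots, len(ranked))):
        winner = ranked[i][0]
        # Second-price rule: pay the next bid down, or the floor price
        # if there is no lower bid.
        price = ranked[i + 1][1] if i + 1 < len(ranked) else floor
        results.append((winner, price))
    return results

bids = {"shop_a": 3.10, "shop_b": 2.72, "shop_c": 1.50, "shop_d": 0.40}
print(gsp_auction(bids, n_slots=2))
# The gap between what the top bidder was willing to pay (3.10) and
# what it actually pays (2.72) is the only surplus it keeps; under
# intense competition that gap shrinks toward zero.
```

The closer the bids crowd together, the closer each winner’s payment gets to its own maximum willingness to pay, which is the surplus-extraction point made above.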

How does it work in practice? Suppose you are looking for a bicycle and just google it. When your AdBlock is off, you see something like this:


Now, you click on one of these ads, buy whatever the site sells, and have your bicycle delivered to you. The advertiser pays about $2.72 to Google for you coming through that link (you can find prices for any search query in the Keyword Planner). This price is determined at the auction, where many bicycle sellers automatically submit their bids along with the ad texts attached to them.

The precise auction algorithm is more complex than just taking the highest bid, because the highest bid may come with an ad you won’t click on, and the opportunity would be wasted. Also, since conversion rates are way below 100%, the advertiser has to pay these $2.72 several times before a real buyer comes by, which increases the price of the bicycles the website sells. Some insurance-related ads cost north of $50 each, all paid by insurance buyers in the end.
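The back-of-envelope step from cost per click to cost per sale looks like this. The 2% conversion rate is an assumed illustrative figure, not anything reported by Google or an advertiser:

```python
# How a $2.72 cost per click becomes a much larger cost per actual sale
# once conversion rates enter. The 2% conversion rate is an assumption.
cpc = 2.72               # price paid to Google per click
conversion_rate = 0.02   # share of clicks that end in a purchase
cost_per_sale = cpc / conversion_rate
print(f"${cost_per_sale:.2f} of ad spend per bicycle sold")  # $136.00
```

That entire amount has to be recovered in the bicycle’s retail price.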

Though this mechanism would make no sense without the users attracted by Google’s great search engine, it extracts the most from customers and transfers it to Google.

Retention: Monopoly

How does Google Search attract users? Well, first, by showing them relevant results. That sounds more trivial now than it did ten years ago. Users now expect the same website to be the first link for almost any consumer good and Wikipedia to top any topic of general interest. These websites are considered the most relevant not because they’re the best in some objective sense, but again because of the particular technologies that made Google so successful.

Larry Page and Sergey Brin’s key contribution to their startup was the PageRank algorithm. PageRank is patented, but the underlying algorithms are easy to find in graph theory: the more links point to your website, the higher your website ranks in search results. When I google “PageRank,” Wikipedia’s article is at the top. When I link to that article here, it becomes more likely that Wikipedia’s article will remain at the top. As a side effect, linking to the first page of Google results creates a serious competitive advantage for top websites. For Wikipedia, that may be a plus, as more people concentrate on improving its pages. But strong positions in search results also secure monopolies in e-commerce.
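The “more links in, higher rank” idea can be shown in a few lines of power iteration. The four-page link graph below is made up; the damping factor 0.85 is the value used in the original PageRank paper:

```python
import numpy as np

# Minimal PageRank by power iteration on a made-up four-page link graph.
# Page 0 is linked to by every other page, so it should rank highest.
links = {0: [1], 1: [0, 2], 2: [0], 3: [0, 2]}  # page -> pages it links to
n = 4
d = 0.85  # damping factor, as in the original PageRank paper

# Column-stochastic matrix: M[j, i] = 1/outdegree(i) if page i links to j.
M = np.zeros((n, n))
for i, outs in links.items():
    for j in outs:
        M[j, i] = 1.0 / len(outs)

rank = np.full(n, 1.0 / n)
for _ in range(100):
    rank = (1 - d) / n + d * M @ rank

print(rank.round(3))  # page 0 ends up with the largest share
```

Adding one more inbound link to a page shifts rank toward it on the next iterations, which is exactly the self-reinforcing effect described above.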

Google’s search technologies are supported by intensive marketing efforts to eliminate competitors. Google paid Mozilla to keep Google as its default search engine all along, until Yahoo! outbid it in 2015. Four years ago, Eric Schmidt testified at a Senate hearing about unfair competition practices, with Google’s search results allegedly biased in favor of Google services. The European Commission is investigating Google’s practices in Europe. In mobile markets, Google requires hardware manufacturers to install Google Mobile Services on all Android devices, so users follow their status quo bias and stay with Google everywhere.

There are more fascinating examples of Google protecting its market share. They’re missing from Eric Schmidt’s book, which gives all the credit to Google’s engineers and none to its lawyers and marketing people.

Development: Privileges

When a typical business creates something, managers carefully watch costs. They negotiate with suppliers, look for quality, build complex supply networks, balance payments, and insure the company against price shocks. Google is the fifth-largest company in the world, but it’s mostly free of these headaches. Unlike Walmart, ExxonMobil, or Berkshire Hathaway, Google employees make things out of thin air and outsource routines, like training its search engine, to third parties.

This ensures that even entry- and mid-level employees are extremely skillful. Not surprisingly, most Google legends concern its HR policies. These legends split into two categories: those that make sense and those that don’t.

The culture stuff is what makes no sense. It’s easy to see in non-policies like granting 20% of time to personal projects. That rule might mean something for car-assembly jobs, but this is software development. An engineer’s personal projects may take 50% of his time if he’s done his daily job, or zero otherwise; it depends on his ability to deliver the results expected for his salary. More importantly, his personal projects belong to Google, even if he delivers his daily projects on time but once edited his personal code on campus.

The book also mentions the 70/20/10 rule: “70 percent of resources dedicated to the core business, 20 percent on emerging, and 10 percent on new.” Even if the authors could prove that this rule is optimal, most other companies are so limited in resources that they have to put 100 percent into the core business.

Nor do the real things make Google’s culture different. Each employee must have a decent workplace, attention, and internal openness, but these things are not sufficient for a great company. We are not in Ancient Greece; other companies also treat employees well: not much slavery around, the meals are fine. Google just tends to be at the extreme.

Laszlo Bock, SVP of People Operations, has tried to dissuade the public from thinking that good HR policies require Google’s profit margins. In his opinion, you can get a lot out of people with openness and good treatment alone. His examples include telling employees about sales figures. It’s a somewhat detached example. Sales numbers aren’t always as optimistic as in Google’s history; there are ups and downs, you know. You have to learn how to communicate the downs to employees and keep them optimistic.

Sobering lessons come from less fortunate startups. Evan Williams of Blogger had the moment when the money ran out and employees didn’t appreciate it: “Everybody left, and the next day, I was the only one who came in the office” (from Jessica Livingston’s Founders at Work, a good, balanced account of the early days at startups). It’s just one example showing that relationships with employees are not as trivial as Bock presents them.

If not culture, then what makes the difference? Quite trivially, privileged access to job candidates. First, money is not an issue, because Google easily outbids everyone else: its entry-level wages surpass those of Wall Street firms, including major hedge funds like Bridgewater Associates and Renaissance Technologies. Second, Google has the right of first interview, which comes with an exceptional reputation, low-stress jobs, secure employment, ambitious goals, and the resources to implement big ideas.

So What?

How Google Works understates the company’s actual achievements. The book is all about famous corporate rules that make the business look simple. It’s not. The $360bn business consists of hundreds of important details in each key operation, like hiring, marketing, and sales.

Keeping these things together is an achievement of Eric Schmidt, Laszlo Bock, and the other executives. However, Schmidt’s book should not mislead other entrepreneurs into thinking that the 20% rule creates great products or that reporting sales numbers to employees increases sales better than ad auctions do. Google is a good role model for learning the hardcore IT business, but readers will have to wait for some other book to learn from this company.

Don’t Listen to Jack Welch (Only His Best Part)

Jack Welch advises executives to leave their rooms and find out more about organizations they manage.

I’m afraid this is exactly what executives will do. Why? Welch makes two points. First, he shows the problem, which is real for sure: knowing your organization is important. Second, he suggests solving it by visiting “stores, trading floors, regional offices, factories.” It’s also a good point, but not the best solution.

By taking Welch’s advice literally, executives will find no more than a mess of emotions, stories, suggestions, and demands. It’s like reading a morning newspaper: you need a lot of prejudices to make sense of this flow of information, because the flow has no sense of its own. It’s best at confirming existing prejudices. If you really want to know something about the world, you should do a comprehensive study of the topic.

How does this look in management? If you want knowledge, organize it. Build an IT system that lets your people talk freely (even anonymously), send requests to supervisors, get feedback, and discuss ideas in a single place. Not face-to-face meetings of the king and His Majesty’s subjects (it always looks that way). It must be a platform at a distance: a person must know it’s for real and feel no pressure.

Computers are stupid but extraordinarily good at handling whatever comes out of this. Can a human delegate 8.5 million problems to 3.7 million solvers in milliseconds? StackOverflow does this routinely and has arguably saved more working hours than YouTube has wasted. It’s a matter of minutes to find popular problems, topics, and experts. It’s easy to find where your help is needed. Such a system shows what matters.

You can spend time traveling around “stores, trading floors, regional offices, factories” to declare, like Johnny Cash, “I’ve been everywhere.” Or you can systematically improve the system that delivers real information from real people right to your armchair. An IT system is better at everything that travel can accomplish: moods, relevant problems, upcoming disasters, and the best ideas. Exciting travels, as Welch noted, show that you’re not alone. But they are not for decision making.

Computer-driven operations at Amazon and Walmart have beaten the flesh-and-blood shops around the corner. These systems know what customers want, unlike shopkeepers who talk to their customers for hours each day. Admitting this takes some modesty about one’s own abilities, but it would be one level up in business management. The creators of Amazon and Walmart could improve because they recognized their limitations and let machines do the work.

This transformation is slow in management because of the reputation email gave IT systems: something delivering tons of letters you have no time to read. That’s a failure of design. Email came from the 70s and hasn’t changed since. ERM and other “management” systems often copy email, asking for too much irrelevant information. They lack human input and a sense of importance. But that’s how public web services looked in the 1990s. They’ve changed tremendously since then, and so will B2B systems. Don’t miss this moment by traveling.