Machine learning is easier than it looks

It’s easy to believe that machine learning is hard. An arcane craft known only to a select few academics.

After all, you’re teaching machines that work in ones and zeros to reach their own conclusions about the world. You’re teaching them how to think! However, it’s not nearly as hard as the complex and formula-laden literature would have you believe.

Like all of the best frameworks we have for understanding our world, e.g. Newton’s Laws of Motion, Jobs to be Done, Supply & Demand — the best ideas and concepts in machine learning are simple. The majority of literature on machine learning, however, is riddled with complex notation, formulae and superfluous language. It puts walls up around fundamentally simple ideas.

Let’s take a practical example. Say we wanted to include a “you might also like” section at the bottom of this post. How would we go about that?

To clarify the idea, let’s look at a naive solution (sketched in code below):

  1. Split the current post title into its individual words
  2. Get all other posts
  3. Sort all other posts by those with the most words in their body in common with our title
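
A rough sketch of this naive approach, assuming each post is a hash with :title and :body strings (an illustrative structure, not the post’s original code):

# Rank the other posts by how many of the current post's title words
# appear in their bodies, and keep the top matches.
def naive_similar_posts(current_post, other_posts, top_n = 10)
  title_words = current_post[:title].downcase.split(/\W+/).uniq
  other_posts.sort_by do |post|
    body_words = post[:body].downcase.split(/\W+/).uniq
    -(title_words & body_words).size   # negate so the biggest overlap sorts first
  end.first(top_n)
end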

Using this method to find posts on this blog similar to “How The Support Team Improves The Product” gives us the following top 10:

  • How To Launch With A Validated Idea
  • Know Your Customers and How They Decide
  • Designing First Run Experiences To Delight Users
  • How to hire designers
  • The Dribbblisation of Design
  • An interview with Ryan Singer
  • Why Being First Doesn’t Matter
  • Proactive Support with Intercom
  • An interview with Joshua Porter
  • Retention, Cohorts, and Visualisations

As you can see, posts about running an effective support process have little in common with cohort analysis, or debate around the merits of design. We can do better.

Let’s try a real machine learning approach. We’re going to break this into two parts:

  1. Represent posts mathematically.
  2. Cluster these mathematical representations with K-Means.

1. Representing posts mathematically

If we can represent our posts mathematically, we can plot the posts, compare distances between posts, and identify clusters of similar posts.

Mapping each post to a mathematical representation is easy; we can do it in two steps (a short code sketch follows the example below):

  1. Find all words in all posts.
  2. Convert each post into an array. Each element is a 1 or a 0, denoting the presence of a word. The array has the same word order for every post, as it’s based on the list from step #1.

If @words equaled:

['hello', 'inside', 'intercom', 'readers', 'blog', 'post']

A post with the body “hello blog post readers” would be mapped to:

[1,0,0,1,1,1]
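
A minimal sketch of these two steps, assuming posts are plain strings (the helper names are illustrative, not the post’s original code):

# Step 1: find every unique word across all posts.
def build_words(posts)
  posts.flat_map { |post| post.downcase.split(/\W+/) }.uniq
end

# Step 2: map one post to a 1/0 array over that shared word list.
def vectorize(post, words)
  post_words = post.downcase.split(/\W+/)
  words.map { |word| post_words.include?(word) ? 1 : 0 }
end

words = ['hello', 'inside', 'intercom', 'readers', 'blog', 'post']
vectorize("hello blog post readers", words)   # => [1, 0, 0, 1, 1, 1]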

We don’t have simple tools for plotting vectors in 6 dimensions like we do for those in 2 dimensions — but concepts like distance extrapolate easily. (The 2-dimensional picture is still useful for building intuition.)
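
For example, Euclidean distance works with the same formula in any number of dimensions; a tiny sketch:

# Euclidean distance between two vectors of equal length,
# in any number of dimensions.
def distance(a, b)
  Math.sqrt(a.zip(b).sum { |x, y| (x - y)**2 })
end

distance([1, 0, 0, 1, 1, 1], [1, 1, 0, 0, 1, 1])   # => 1.414...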

2. Clustering posts with K-Means

Now we have a mathematical representation of our blog posts — let’s try to find clusters of similar posts. To do this we’re going to use a crazy simple clustering algorithm called K-Means, which can be described in 5 steps (a rough code sketch follows the list):

  1. Set ‘K’ to the number of clusters you want
  2. Choose ‘K’ random points
  3. Assign each document to its closest point
  4. Choose ‘K’ new points, from the ‘average’ of all documents assigned to each point
  5. Repeat steps 3-4 until documents’ assignments stop changing.
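
A minimal sketch of those five steps (an illustrative implementation, not the post’s original code):

# Euclidean distance between two equal-length vectors.
def distance(a, b)
  Math.sqrt(a.zip(b).sum { |x, y| (x - y)**2 })
end

# Element-wise mean of a non-empty group of vectors.
def average(vectors)
  vectors.transpose.map { |dim| dim.sum.to_f / vectors.size }
end

# Steps 1-5: cluster an array of numeric vectors into k groups.
def k_means(vectors, k, max_iterations = 100)
  centers = vectors.sample(k)                          # step 2: k random points
  assignments = []
  max_iterations.times do
    # step 3: assign each document to its closest point
    new_assignments = vectors.map do |v|
      (0...k).min_by { |i| distance(v, centers[i]) }
    end
    break if new_assignments == assignments            # step 5: stop when stable
    assignments = new_assignments
    # step 4: move each point to the average of its assigned documents
    centers = (0...k).map do |i|
      members = vectors.select.with_index { |_, j| assignments[j] == i }
      members.empty? ? centers[i] : average(members)
    end
  end
  assignments                                          # cluster index for each vector
end

Posts whose vectors land in the same cluster as the current post become the “you might also like” candidates.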

Let’s visualize these steps. First, we choose 2 (i.e. k = 2) random points in the same space as our posts.

We assign each document to its closest point.

We re-evaluate the center of each of these clusters to be the average of all posts in that cluster.

That’s the end of our first iteration. Now we re-assign each post to its new closest point.

We’ve found our clusters! We know this because further iterations would not change the assignments.

Here are the top 10 posts similar to “How The Support Team Improves The Product” with this method:

  • Are you being Clear, or Clever?
  • 3 Rules for Customer Feedback
  • Asking customers what you want to hear
  • Shipping is the beginning of a process
  • What Does Feature Creep Look Like?
  • Getting Insight Into Your Userbase
  • Converting Customers with the Right Message at the Right time
  • Conversations With Your Customers
  • Does your app have a message schedule?
  • Have You Tried Talking To Your Customers?

The results speak for themselves.

We achieved all of this with less than 40 lines of code, and some simple algorithms that can be described in a blog post. However, you would never know how simple some of these ideas are from reading academic literature. Here’s an excerpt from the paper introducing K-Means (it’s hard to pinpoint the exact first introduction of K-Means, but this was the first paper to use the term “K-Means”):

The academic literature can often be useful if you’re willing to work through the notation. However, there are a lot of excellent alternative resources that are more practical and approachable.

Give it a try

Want to suggest tags in your project management app? Or assignees in your customer support tool? Or members of a group on a social network? The chances are that some simple code and an easy algorithm will get you there. So, when faced with a challenge in your product where you believe machine learning can help, don’t be discouraged.

Machine learning is easier than you might think.
