<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>recall Archives - Creatronix</title>
	<atom:link href="https://creatronix.de/tag/recall/feed/" rel="self" type="application/rss+xml" />
	<link>https://creatronix.de/tag/recall/</link>
	<description>My adventures in code &#38; business</description>
	<lastBuildDate>Sun, 05 Jan 2025 10:25:03 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.9.4</generator>
	<item>
		<title>Classification: Precision and Recall</title>
		<link>https://creatronix.de/classification-precision-and-recall/</link>
		
		<dc:creator><![CDATA[Jörn]]></dc:creator>
		<pubDate>Thu, 28 Jun 2018 15:10:00 +0000</pubDate>
				<category><![CDATA[Data Science & SQL]]></category>
		<category><![CDATA[accuracy]]></category>
		<category><![CDATA[cat]]></category>
		<category><![CDATA[data science]]></category>
		<category><![CDATA[dog]]></category>
		<category><![CDATA[f1 score]]></category>
		<category><![CDATA[false positive]]></category>
		<category><![CDATA[precision]]></category>
		<category><![CDATA[recall]]></category>
		<category><![CDATA[true positive]]></category>
		<guid isPermaLink="false">http://creatronix.de/?p=1649</guid>

					<description><![CDATA[<p>In the realms of Data Science you&#8217;ll sooner or later encounter the terms &#8220;Precision&#8221; and &#8220;Recall&#8221;. But what do they mean? Clarification Living together with little kids You very often run into classification issues: My daughter really likes dogs, so seeing a dog is something positive. When she sees a normal dog e.g. a&#8230;</p>
<p>The post <a href="https://creatronix.de/classification-precision-and-recall/">Classification: Precision and Recall</a> appeared first on <a href="https://creatronix.de">Creatronix</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>In the realms of Data Science you&#8217;ll sooner or later encounter the terms &#8220;Precision&#8221; and &#8220;Recall&#8221;. But what do they mean?</p>
<p><img decoding="async" id="im" src="https://i.imgflip.com/3opnpn.jpg" alt="Two Buttons Meme | Recall; Precision | image tagged in memes,two buttons | made w/ Imgflip meme maker" /></p>
<h2>Clarification</h2>
<p>Living together with little kids, you very often run into classification issues:</p>
<p>My daughter really likes dogs, so seeing a dog is something positive. When she sees a normal dog, e.g. a Labrador, she proclaims: &#8220;Look, there is a dog!&#8221;</p>
<p>That&#8217;s a <strong>True Positive (TP)</strong></p>
<p>If she now sees a fat cat and proclaims: &#8220;Look at the dog!&#8221; we call it a <strong>False Positive (FP)</strong>, because her assumption of a positive outcome (a dog!) was false. A false positive is also called a Type 1 error.</p>
<p>If I point at a small dog, e.g. a Chihuahua, and say &#8220;Look at the dog!&#8221; and she cries: &#8220;This is not a dog!&#8221; although it indeed is one, we call that a <strong>False Negative (FN)</strong>. A false negative is also called a Type 2 error.</p>
<p>And last but not least, if I show her a bird and we agree that the bird is not a dog, we have a <strong>True Negative (TN)</strong>.</p>
<p>This neat little matrix shows all of them in context:<br />
<img fetchpriority="high" decoding="async" class="alignnone size-full wp-image-1669" src="https://creatronix.de/wp-content/uploads/2018/06/precision_and_recall.png" alt="" width="479" height="480" srcset="https://creatronix.de/wp-content/uploads/2018/06/precision_and_recall.png 479w, https://creatronix.de/wp-content/uploads/2018/06/precision_and_recall-150x150.png 150w, https://creatronix.de/wp-content/uploads/2018/06/precision_and_recall-300x300.png 300w, https://creatronix.de/wp-content/uploads/2018/06/precision_and_recall-100x100.png 100w" sizes="(max-width: 479px) 100vw, 479px" /></p>
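<p>The four cells of this matrix can be counted directly from a list of true and predicted labels. A minimal sketch (the two label lists here are made up for illustration):</p>

```python
# Count the confusion-matrix cells for the positive class "dog"
y_true = ["dog", "cat", "dog", "bird", "dog", "cat"]
y_pred = ["dog", "dog", "not-a-dog", "not-a-dog", "dog", "not-a-dog"]

tp = sum(t == "dog" and p == "dog" for t, p in zip(y_true, y_pred))  # real dog, called dog
fp = sum(t != "dog" and p == "dog" for t, p in zip(y_true, y_pred))  # no dog, called dog
fn = sum(t == "dog" and p != "dog" for t, p in zip(y_true, y_pred))  # real dog, missed
tn = sum(t != "dog" and p != "dog" for t, p in zip(y_true, y_pred))  # no dog, correctly rejected

print(tp, fp, fn, tn)  # 2 1 1 2
```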
<h2>Precision and Recall</h2>
<p>If I show my daughter twenty pictures of cats and dogs (8 cat pictures and 12 dog pictures) and she identifies 10 as dogs, but out of these ten there are actually 2 cats, her precision is 8 / (8+2) = 4/5 or 80%.</p>
<p><strong>Precision = <span style="color: #ff0000;">TP</span> / (<span style="color: #339966;">TP + FP</span>)</strong></p>
<p><img decoding="async" class="alignnone size-full wp-image-1672" src="https://creatronix.de/wp-content/uploads/2018/06/precision.png" alt="" width="479" height="480" srcset="https://creatronix.de/wp-content/uploads/2018/06/precision.png 479w, https://creatronix.de/wp-content/uploads/2018/06/precision-150x150.png 150w, https://creatronix.de/wp-content/uploads/2018/06/precision-300x300.png 300w, https://creatronix.de/wp-content/uploads/2018/06/precision-100x100.png 100w" sizes="(max-width: 479px) 100vw, 479px" /></p>
<p>Knowing that there are actually 12 dog pictures and she misses 4 of them (false negatives), her recall is 8 / (8+4) = 2/3 or roughly 67%.</p>
<p><strong>Recall = <span style="color: #ff0000;">TP</span> / (<span style="color: #339966;">TP + FN</span>)</strong></p>
<p><img decoding="async" class="alignnone size-full wp-image-1673" src="https://creatronix.de/wp-content/uploads/2018/06/recall.png" alt="" width="479" height="480" srcset="https://creatronix.de/wp-content/uploads/2018/06/recall.png 479w, https://creatronix.de/wp-content/uploads/2018/06/recall-150x150.png 150w, https://creatronix.de/wp-content/uploads/2018/06/recall-300x300.png 300w, https://creatronix.de/wp-content/uploads/2018/06/recall-100x100.png 100w" sizes="(max-width: 479px) 100vw, 479px" /></p>
<p>Which measure is more important?</p>
<p>It depends:</p>
<p>If you&#8217;re a dog lover, a high precision is better; if you are afraid of dogs and want to avoid them, a higher recall is better 🙂</p>
<h3>Different terms</h3>
<p>Precision is also called <strong>Positive Predictive Value (PPV)</strong></p>
<p>Recall is also often called</p>
<ul>
<li>True positive rate</li>
<li>Sensitivity</li>
<li>Probability of detection</li>
</ul>
<h2>Other interesting measures</h2>
<h2>Accuracy</h2>
<p><strong>ACC = (<span style="color: #ff0000;">TP + TN</span>) / (<span style="color: #339966;">TP + FP + TN + FN</span>)</strong></p>
<p><img decoding="async" class="alignnone size-full wp-image-1674" src="https://creatronix.de/wp-content/uploads/2018/06/accuracy.png" alt="" width="479" height="480" srcset="https://creatronix.de/wp-content/uploads/2018/06/accuracy.png 479w, https://creatronix.de/wp-content/uploads/2018/06/accuracy-150x150.png 150w, https://creatronix.de/wp-content/uploads/2018/06/accuracy-300x300.png 300w, https://creatronix.de/wp-content/uploads/2018/06/accuracy-100x100.png 100w" sizes="(max-width: 479px) 100vw, 479px" /></p>
<h3>F1-Score</h3>
<p>You can combine Precision and Recall into a single measure called the F1-Score. It is the harmonic mean of precision and recall:</p>
<p><strong>F1 = 2 / (1/Precision + 1/Recall)</strong></p>
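<p>With the precision of 80% and the recall of 2/3 from the picture example, the harmonic mean works out like this:</p>

```python
precision, recall = 0.8, 8 / 12

# harmonic mean: 2 / (1/P + 1/R)
f1 = 2 / (1 / precision + 1 / recall)
print(round(f1, 3))  # 0.727
```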
<h3>Scikit-Learn</h3>
<p>scikit-learn, being a one-stop shop for data scientists, of course offers functions for calculating precision and recall:</p>
<pre>from sklearn.metrics import precision_score

y_true = ["dog", "dog", "not-a-dog", "not-a-dog", "dog", "dog"]
y_pred = ["dog", "not-a-dog", "dog", "not-a-dog", "dog", "not-a-dog"]

print(precision_score(y_true, y_pred, pos_label="dog"))</pre>
<p>Here we assume we trained a binary classifier which can tell us &#8220;dog&#8221; or &#8220;not-a-dog&#8221;.</p>
<p>In this example the precision is 0.666 or ~67%, because in two thirds of the cases the algorithm was right when it predicted a dog.</p>
<pre>from sklearn.metrics import recall_score

print(recall_score(y_true, y_pred, pos_label="dog"))</pre>
<p>The recall was just 0.5 or 50% because out of 4 dogs it identified only 2 correctly as dogs.</p>
<pre>from sklearn.metrics import accuracy_score

print(accuracy_score(y_true, y_pred))</pre>
<p>The accuracy was also just 50% because out of 6 items it made only 3 correct predictions.</p>
<pre>from sklearn.metrics import f1_score

print(f1_score(y_true, y_pred, pos_label="dog"))</pre>
<p>The F1 score is 0.57 &#8211; just between 0.5 and 0.666.</p>
<p>What other scores do you encounter? &#8211; stay tuned for the next episode 🙂</p>
<p>The post <a href="https://creatronix.de/classification-precision-and-recall/">Classification: Precision and Recall</a> appeared first on <a href="https://creatronix.de">Creatronix</a>.</p>
]]></content:encoded>
					
		
		
			</item>
	</channel>
</rss>
