{"id":175,"date":"2011-03-07T07:03:43","date_gmt":"2011-03-07T06:03:43","guid":{"rendered":"http:\/\/mod16.org\/hurfdurf\/?p=175"},"modified":"2011-03-07T09:07:35","modified_gmt":"2011-03-07T08:07:35","slug":"a-lengthy-treatise-on-the-relationship-between-filesize-and-perceived-video-quality-the-real-version","status":"publish","type":"post","link":"https:\/\/mod16.org\/hurfdurf\/?p=175","title":{"rendered":"A lengthy treatise on the relationship between filesize and perceived video quality: the real version"},"content":{"rendered":"<p>I posted a <a href=\"http:\/\/www.ggkthx.org\/2011\/03\/06\/a-lengthy-treatise-on-the-relationship-between-filesize-and-perceived-video-quality\/\">troll post<\/a> on ggkthx.org since people had been hurfing durf about filesizes more than usual lately. Then I thought maybe there is someone out there who actually would like to be educated on this for real, and I&#8217;m very bored right now, so here goes nothing. (Note: this post is not for encoders as most of the other posts here; it&#8217;s intended for people with no prior encoding experience.)<\/p>\n<p>When you are encoding video with x264 (or actually with any lossy encoder, audio or video), there are two basic ways to choose how big the resulting file will be: either you pick an average size (in bits) per second, or you pick an average quality (given by some arbitrary metric specified by the encoder). The former lets you predict how big the file will be, since you know how long the file is and how many bytes per second the encoder can use on average, while the latter lets you predict the average quality but not the filesize, since there&#8217;s no easy way to predict in advance how much space the encoder needs to achieve the desired quality.<\/p>\n<p>Now, the thing here is that different source materials are easier than others to compress. Let&#8217;s explain this by example: one of the simplest possible compression algorithms is a run-length encoder (RLE). Basically, it works by taking a string like AAAAAABBBCCCC and compressing it to A6 B3 C4; i.e. the character followed by a number that says how many times that character should be repeated. Thus, a long string of a hundred A&#8217;s will compress into A100, which is just four characters compared to the original 100, while a string that consists of the entire alphabet in sequence will not be compressed at all (assuming the encoder optimizes the &#8220;letter appears once&#8221; case to omitting the number afterwards; otherwise the output would actually be twice as big as the input).<\/p>\n<p>All compression algorithms, lossy or not, have the same property: some inputs are easier to compress than others. For video compressors, static &#8220;talking heads&#8221; scenes without much motion will generally compress a lot better than high-motion fight scenes. Big flat areas of the same color will also compress a lot better than images full of small details.<\/p>\n<p>Back in the day pretty much all fansub encoders used to use the average bitrate approach to encoding, and make all episodes fit in evenly on a CD or DVD-R. These days, not many people uses that kind of backup anymore (and recordable media is so cheap anyway that you don&#8217;t really care about wasting a few hundred MB on a DVD-R), so most people have shifted over to the &#8220;average quality&#8221; mode of encoding. 
All compression algorithms, lossy or not, share this property: some inputs are easier to compress than others. For video compressors, static "talking heads" scenes without much motion will generally compress a lot better than high-motion fight scenes, and big flat areas of a single color will compress a lot better than images full of small details.

Back in the day, pretty much all fansub encoders used the average-bitrate approach, making every episode fit evenly onto a CD or DVD-R. These days not many people use that kind of backup anymore (and recordable media is so cheap anyway that you don't really care about wasting a few hundred MB on a DVD-R), so most encoders have shifted over to the "average quality" mode. Thus, one episode of a given anime series may turn out at half the filesize of another, despite the average quality being the same (average quality as defined by x264's metric; it does not necessarily match human perception perfectly).

In other words: in general, there is no relationship between filesize and perceived quality. Some series or episodes will look perfectly acceptable at less than 100MB per episode at 480p (see: Togainu no Chi). Others require >500MB per episode at 720p to achieve roughly the same perceived quality. Now shut the fuck up about filesizes and go get a better internet connection (hint: Canada is a third world country).
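For the curious, here is roughly what the two rate-control modes from earlier look like with the standalone x264 command-line encoder. This is a sketch, assuming a build with Avisynth input and Matroska output support; the filenames and numbers are made up:

```sh
# Average-bitrate mode: predictable size, unpredictable quality.
# Two passes are customary, so the first pass can tell the encoder
# where the hard scenes are before it commits any bits.
x264 --pass 1 --bitrate 1200 -o /dev/null episode.avs
x264 --pass 2 --bitrate 1200 -o episode.mkv episode.avs

# Average-quality (CRF) mode: predictable quality, unpredictable size.
# Lower CRF means higher quality and a bigger file.
x264 --crf 18 -o episode.mkv episode.avs
```

Note that CRF mode needs only a single pass: the encoder simply spends whatever each scene requires, which is exactly why the resulting filesize cannot be known in advance.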