{"id":758,"date":"2015-01-21T06:27:11","date_gmt":"2015-01-21T05:27:11","guid":{"rendered":"http:\/\/eyesofthings.eu\/?p=758"},"modified":"2015-01-21T06:27:11","modified_gmt":"2015-01-21T05:27:11","slug":"from-feature-descriptors-to-deep-learning-20-years-of-computer-vision","status":"publish","type":"post","link":"https:\/\/eyesofthings.eu\/?p=758","title":{"rendered":"From feature descriptors to deep learning: 20 years of computer vision"},"content":{"rendered":"<p>From feature descriptors to deep learning: 20 years of computer vision http:\/\/quantombone.blogspot.ie\/2015\/01\/from-feature-descriptors-to-deep.html<\/p>\n<div class=\"p1\">We all know that deep convolutional neural networks have produced some stellar results on object detection and recognition benchmarks in the past two years (2012-2014), so you might wonder: <i>what did the earlier object recognition techniques look like<\/i>? <i>How do the designs of earlier recognition systems relate to the modern multi-layer convolution-based framework<\/i>?<\/p>\n<p>Let&#8217;s take a look at some of the big ideas in Computer Vision from the last 20 years.<\/p>\n<div><\/div>\n<p><span class=\"s1\"><b>The rise of the local feature descriptors: ~1995 to ~2000<\/b><\/span><\/div>\n<div class=\"p1\"><span class=\"s1\">When <b>SIFT<\/b> (an acronym for Scale Invariant Feature Transform) was introduced by <b>David Lowe<\/b> in 1999, the world of computer vision research changed almost overnight. It was robust solution to the problem of comparing image patches. 
Before SIFT entered the game, people were just using SSD (sum of squared distances) to compare patches and not giving it much thought.<\/span><\/div>\n<div class=\"separator\"><a href=\"http:\/\/3.bp.blogspot.com\/-2Lw3DxApZrw\/VKBKMeAwTnI\/AAAAAAAANyU\/7IQxfszsclc\/s1600\/sift_pic.png\"><img loading=\"lazy\" decoding=\"async\" src=\"http:\/\/3.bp.blogspot.com\/-2Lw3DxApZrw\/VKBKMeAwTnI\/AAAAAAAANyU\/7IQxfszsclc\/s1600\/sift_pic.png\" alt=\"\" width=\"320\" height=\"149\" border=\"0\" \/><\/a><\/div>\n<div class=\"separator\">The SIFT recipe: gradient orientations, normalization tricks<\/div>\n<div class=\"p2\"><\/div>\n<div class=\"p1\"><span class=\"s1\">SIFT is something called a local feature descriptor &#8212; it is one of those research findings which is the result of one ambitious man hackplaying with pixels for more than a decade. \u00a0Lowe and the University of British Columbia got a patent on SIFT and <i>Lowe released a nice compiled binary of his very own SIFT implementation for researchers to use in their work<\/i>. 
\u00a0SIFT allows a point inside an RGB image to be represented robustly by a low dimensional vector.\u00a0 When you take multiple images of the same physical object while rotating the camera, the SIFT descriptors of corresponding points are very similar in their 128-D space.\u00a0 At first glance it seems silly that you need to do something as complex as SIFT, but believe me: just because you, a human, can look at two image patches and quickly \u00abunderstand\u00bb that they belong to the same physical point does not mean a machine can do the same.\u00a0 SIFT had massive implications for the geometric side of computer vision (stereo, Structure from Motion, etc) and later became the basis for the popular Bag of Words model for object recognition.<\/span><br \/>\n<span class=\"s1\"><br \/>\n<\/span>Seeing a technique like SIFT dramatically outperform an alternative method like Sum-of-Squared-Distances (SSD) Image Patch Matching firsthand is an important step in every aspiring vision scientist&#8217;s career. And SIFT isn&#8217;t just a vector of filter bank responses; the binning and normalization steps are very important. It is also worth noting that while SIFT was initially (in its published form) applied to the output of an interest point detector, later it was found that the interest point detection step was not important in categorization problems. 
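To make the SSD-versus-descriptor contrast concrete, here is a minimal toy sketch (my own illustration, not Lowe's actual algorithm): a magnitude-weighted, L2-normalized histogram of gradient orientations shrugs off a brightness/contrast change that makes raw SSD blow up. Real SIFT additionally computes such histograms over a 4x4 spatial grid (hence 128-D), with Gaussian weighting and clipping.

```python
import numpy as np

def ssd(a, b):
    """Sum of squared differences between two patches -- the pre-SIFT baseline."""
    return float(np.sum((a.astype(float) - b.astype(float)) ** 2))

def orientation_histogram(patch, n_bins=8):
    """Toy SIFT-flavoured descriptor (illustration only): a histogram of
    gradient orientations, weighted by gradient magnitude, then L2-normalized."""
    gy, gx = np.gradient(patch.astype(float))
    mag = np.hypot(gx, gy)
    ori = np.arctan2(gy, gx)                                  # in (-pi, pi]
    bins = ((ori + np.pi) / (2 * np.pi) * n_bins).astype(int) % n_bins
    hist = np.bincount(bins.ravel(), weights=mag.ravel(), minlength=n_bins)
    norm = np.linalg.norm(hist)
    return hist / norm if norm > 0 else hist

rng = np.random.default_rng(0)
patch = rng.random((16, 16))
brighter = patch * 1.5 + 0.2   # same patch under a brightness/contrast change

# SSD thinks the patches are very different; the normalized gradient histogram
# barely moves, because additive brightness cancels in the gradient and the
# contrast factor cancels in the normalization.
d_ssd = ssd(patch, brighter)
d_hist = np.linalg.norm(orientation_histogram(patch) - orientation_histogram(brighter))
```

The same invariances (plus the spatial binning) are what make descriptor matching survive the viewpoint and lighting changes that defeat raw pixel comparison.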
\u00a0For categorization, researchers eventually moved towards vector quantized SIFT applied densely across an image.<\/p>\n<p>I should also mention that other descriptors such as <b>Spin Images<\/b> (see my <a href=\"http:\/\/quantombone.blogspot.com\/2009\/07\/spin-images-for-object-recognition-in.html\">2009 blog post on spin images<\/a>) came out a little bit earlier than SIFT, but because Spin Images were solely applicable to 2.5D data, this feature&#8217;s impact wasn&#8217;t as great as that of SIFT.<\/div>\n<div class=\"p2\"><\/div>\n<div class=\"p1\"><span class=\"s1\"><b>The modern dataset (aka the hardening of vision as science): ~2000 to ~2005<\/b><\/span><\/div>\n<div class=\"p1\">Homography estimation, ground-plane estimation, robotic vision, SfM, and all other geometric problems in vision greatly benefited from robust image features such as SIFT. \u00a0But towards the end of the 1990s, it was clear that <i>the internet was the next big thing<\/i>. \u00a0Images were going online. Datasets were being created. \u00a0And no longer was the current generation solely interested in structure recovery (aka geometric) problems. \u00a0This was the beginning of the large-scale dataset era with <a href=\"http:\/\/www.vision.caltech.edu\/Image_Datasets\/Caltech101\/\">Caltech-101<\/a> slowly gaining popularity and categorization research on the rise. No longer were researchers evaluating their own algorithms on their own in-house datasets &#8212; we now had a more objective and standard way to determine if yours is bigger than mine. \u00a0Even though Caltech-101 is considered outdated by 2015 standards, it is fair to think of this dataset as the Grandfather of the more modern ImageNet dataset. 
Thanks <a href=\"http:\/\/vision.stanford.edu\/feifeili\/\">Fei-Fei Li<\/a>.<\/p>\n<div class=\"separator\"><a href=\"http:\/\/www.vision.caltech.edu\/Image_Datasets\/Caltech101\/\"><img loading=\"lazy\" decoding=\"async\" src=\"http:\/\/3.bp.blogspot.com\/-GotW1sXXx_4\/VKFVGQ62vRI\/AAAAAAAANzU\/y5S_qZKAoG4\/s1600\/caltech101.jpg\" alt=\"\" width=\"320\" height=\"270\" border=\"0\" \/><\/a><\/div>\n<div class=\"separator\">Category-based datasets: the infamous Caltech-101 TorralbaArt image<\/div>\n<p><b>Bins, Grids, and Visual Words (aka Machine Learning meets descriptors): ~2000 to ~2005<\/b><br \/>\n<span class=\"s1\">After the community shifted towards more ambitious object recognition problems and away from geometry recovery problems, we had a flurry of research in Bag of Words, Spatial Pyramids, Vector Quantization, as well as machine learning tools used in any and all stages of the computer vision pipeline. \u00a0Raw SIFT was great for wide-baseline stereo, but it wasn&#8217;t powerful enough to provide matches between two distinct object instances from the same visual object category. \u00a0What was needed was a way to encode the following ideas: object parts can deform relative to each other and some image patches can be missing. \u00a0Overall, a much more <i>statistical way to characterize objects was needed<\/i>.<\/span><br \/>\n<span class=\"s1\"><br \/>\n<\/span><span class=\"s1\">Visual Words were introduced by Josef Sivic and Andrew Zisserman in approximately 2003 and this was a clever way of taking algorithms from large-scale text matching and applying them to visual content. \u00a0A visual dictionary can be obtained by performing unsupervised learning (basically just K-means) on SIFT descriptors which maps these 128-D real-valued vectors into integers (which are cluster center assignments). \u00a0A histogram of these visual words is a fairly robust way to represent images. 
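The visual-dictionary pipeline just described (unsupervised K-means on descriptors, then quantize each descriptor to its nearest cluster center and histogram the resulting word ids) can be sketched in a few lines. The 8-D random "descriptors" below are toy stand-ins for real 128-D SIFT vectors:

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in "SIFT descriptors": in reality these would be 128-D vectors
# extracted densely from a corpus of training images.
descriptors = rng.normal(size=(500, 8))

def kmeans(X, k, iters=20, seed=0):
    """Plain K-means -- the unsupervised step that turns real-valued
    descriptors into a discrete visual dictionary of k 'words'."""
    r = np.random.default_rng(seed)
    centers = X[r.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        # assign every descriptor to its nearest cluster center
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):       # skip empty clusters
                centers[j] = X[labels == j].mean(axis=0)
    return centers

def bag_of_words(image_descriptors, centers):
    """Quantize each descriptor to its nearest visual word, then build a
    normalized histogram of word counts -- the image representation."""
    d = np.linalg.norm(image_descriptors[:, None, :] - centers[None, :, :], axis=2)
    words = d.argmin(axis=1)
    hist = np.bincount(words, minlength=len(centers)).astype(float)
    return hist / hist.sum()

vocab = kmeans(descriptors, k=16)
image = rng.normal(size=(60, 8))          # descriptors from one "image"
h = bag_of_words(image, vocab)            # 16-D histogram summing to 1
```

Two images can then be compared by any histogram distance, which is exactly the text-retrieval trick Sivic and Zisserman borrowed.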
\u00a0Variants of the Bag of Words model are still heavily utilized in vision research.<\/span><\/p>\n<div class=\"separator\"><a href=\"http:\/\/3.bp.blogspot.com\/-pfeV3FAW_fA\/VKFaZcYaxjI\/AAAAAAAANzk\/cMErRKX7rAA\/s1600\/lola.png\"><img loading=\"lazy\" decoding=\"async\" src=\"http:\/\/3.bp.blogspot.com\/-pfeV3FAW_fA\/VKFaZcYaxjI\/AAAAAAAANzk\/cMErRKX7rAA\/s1600\/lola.png\" alt=\"\" width=\"320\" height=\"121\" border=\"0\" \/><\/a><\/div>\n<div class=\"separator\">Josef Sivic&#8217;s \u00abVideo Google\u00bb: Matching Graffiti inside the Run Lola Run video<\/div>\n<p><span class=\"s1\"><br \/>\n<\/span><\/div>\n<div class=\"p1\"><span class=\"s1\">Another idea gaining traction at the time was using some sort of binning structure for matching objects. \u00a0Caltech-101 images mostly contained objects, so these grids were initially placed around entire images, and later on they would be placed around object bounding boxes. \u00a0Here is a picture from Kristen Grauman&#8217;s famous <a href=\"http:\/\/www.cs.utexas.edu\/~grauman\/research\/projects\/pmk\/pmk_projectpage.htm\">Pyramid Match Kernel<\/a> paper which introduced a powerful and hierarchical way of integrating spatial information into the image matching process.<\/span><br \/>\n<span class=\"s1\"><br \/>\n<\/span><\/div>\n<div class=\"separator\"><a href=\"http:\/\/3.bp.blogspot.com\/-5aTdQ2Py6ak\/VKBO33A5xII\/AAAAAAAANyg\/x9TWuramoKw\/s1600\/pmk.jpg\"><img loading=\"lazy\" decoding=\"async\" src=\"http:\/\/3.bp.blogspot.com\/-5aTdQ2Py6ak\/VKBO33A5xII\/AAAAAAAANyg\/x9TWuramoKw\/s1600\/pmk.jpg\" alt=\"\" width=\"320\" height=\"213\" border=\"0\" \/><\/a><\/div>\n<div class=\"separator\">Grauman&#8217;s Pyramid Match Kernel for Improved Image Matching<\/div>\n<div class=\"p1\"><span class=\"s1\">\u00a0<\/span><\/div>\n<div class=\"p2\"><\/div>\n<div class=\"p1\"><span class=\"s1\">At some point it was not clear whether researchers should focus on better features, better comparison 
metrics, or better learning. \u00a0In the mid 2000s it wasn&#8217;t clear if young PhD students should spend more time concocting new descriptors or kernelizing their support vector machines to death.<\/span><\/div>\n<div class=\"p2\"><\/div>\n<div class=\"p1\"><span class=\"s1\"><b>Object Templates (aka the reign of HOG and DPM): ~2005 to ~2010<\/b><\/span><\/div>\n<div class=\"p1\"><span class=\"s1\">Around 2005, a young researcher named Navneet Dalal showed the world just what could be done with his own new badass feature descriptor, HOG. \u00a0(It is sometimes written as HoG, but because it is an acronym for \u201cHistogram of Oriented Gradients\u201d it should really be HOG. The confusion must have come from an earlier approach called DoG which stood for Difference of Gaussian, in which case the \u201co\u201d should definitely be lower case.)<\/span><\/div>\n<div class=\"p1\"><span class=\"s1\">\u00a0<\/span><\/div>\n<div class=\"separator\"><a href=\"http:\/\/2.bp.blogspot.com\/-7BaGjkSq6rc\/VKBPTRYzNgI\/AAAAAAAANyo\/4VsIBTP-NVY\/s1600\/hog.jpg\"><img loading=\"lazy\" decoding=\"async\" src=\"http:\/\/2.bp.blogspot.com\/-7BaGjkSq6rc\/VKBPTRYzNgI\/AAAAAAAANyo\/4VsIBTP-NVY\/s1600\/hog.jpg\" alt=\"\" width=\"320\" height=\"90\" border=\"0\" \/><\/a><\/div>\n<div class=\"separator\">Navneet Dalal&#8217;s HOG Descriptor<\/div>\n<p>&nbsp;<\/p>\n<div class=\"p1\"><span class=\"s1\">HOG came at the time when everybody was applying spatial binning to bags of words, using multiple layers of learning, and making their systems overly complicated. Dalal\u2019s ingenious descriptor was actually quite simple.\u00a0 The seminal HOG paper was published in 2005 by Navneet and his PhD advisor, Bill Triggs. Triggs got his fame from earlier work on geometric vision, and Dr. 
Dalal got his fame from his newly found descriptor.\u00a0 HOG was initially applied to the problem of pedestrian detection, and one of the reasons it became so popular was that the machine learning tool used on top of HOG was quite simple and well understood: the linear Support Vector Machine.<\/span><\/div>\n<div class=\"p2\"><\/div>\n<div class=\"p2\"><\/div>\n<div class=\"p1\"><span class=\"s1\">I should point out that in 2008, a follow-up paper on object detection, which introduced a technique called the Deformable Parts-based Model (or DPM as we vision guys call it), helped reinforce the popularity and strength of the HOG technique. I personally jumped on the HOG bandwagon in about 2008.\u00a0 During my first few years as a grad student (2005-2008) I was hackplaying with my own vector quantized filter bank responses, and definitely developed some strong intuition regarding features. \u00a0In the end I realized that my own features were only \u00abokay,\u00bb and because I was applying them to the outputs of image segmentation algorithms they were extremely slow.\u00a0 Once I started using HOG, it didn\u2019t take me long to realize there was no going back to custom, slow features. 
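At detection time, the HOG-plus-linear-SVM recipe boils down to sliding the learned weight template over a feature grid and scoring every window with a dot product. A toy sketch of that scoring loop (a random template and a synthetic feature map stand in for real learned HOG weights):

```python
import numpy as np

rng = np.random.default_rng(2)

# A trained linear SVM over HOG cells is just a weight template w (plus a bias);
# this random 4x4 template is a stand-in for real learned weights.
w = rng.normal(size=(4, 4))

# Synthetic feature map with the "object" planted at row 7, col 5.
feature_map = np.zeros((20, 20))
feature_map[7:11, 5:9] = w

def sliding_window_scores(fmap, template, bias=0.0):
    """Score = <w, window> + b at every location -- the classic detector loop.
    Note this is exactly a cross-correlation of the map with the template,
    which is why the modern 'convolve with a filter' view is the same math."""
    H, W = fmap.shape
    h, wd = template.shape
    scores = np.empty((H - h + 1, W - wd + 1))
    for i in range(scores.shape[0]):
        for j in range(scores.shape[1]):
            scores[i, j] = np.sum(fmap[i:i+h, j:j+wd] * template) + bias
    return scores

scores = sliding_window_scores(feature_map, w)
best = np.unravel_index(scores.argmax(), scores.shape)   # location of the object
```

In a real detector this scoring is repeated over a multiscale feature pyramid, with non-maximum suppression applied to the score maps.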
\u00a0Once I started using a multiscale feature pyramid with a slightly improved version of HOG introduced by master hackers such as Ramanan and Felzenszwalb, I was processing images at 100x the speed of multiple segmentations + custom features (my earlier work).<\/span><\/div>\n<div class=\"separator\"><a href=\"http:\/\/1.bp.blogspot.com\/-9ZrYA5J3R3k\/VKBPu-uaCMI\/AAAAAAAANyw\/FyEgea8HL5o\/s1600\/dpm.jpg\"><img loading=\"lazy\" decoding=\"async\" src=\"http:\/\/1.bp.blogspot.com\/-9ZrYA5J3R3k\/VKBPu-uaCMI\/AAAAAAAANyw\/FyEgea8HL5o\/s1600\/dpm.jpg\" alt=\"\" width=\"200\" height=\"152\" border=\"0\" \/><\/a><\/div>\n<div class=\"separator\">The infamous Deformable Part-based Model (for a Person)<\/div>\n<div class=\"p2\"><\/div>\n<div class=\"p1\"><span class=\"s1\">DPM was the reigning champ on the PASCAL VOC challenge, and one of the reasons why it became so popular was <i>the excellent MATLAB\/C++\u00a0implementation by Ramanan and Felzenszwalb<\/i>.\u00a0 I still know many researchers who never fully acknowledged what releasing such great code really meant for the fresh generation of incoming PhD students, but at some point it seemed like everybody was modifying the DPM codebase for their own CVPR attempts.\u00a0 Too many incoming students were lacking solid software engineering skills and giving them the DPM code was a surefire way to get some experiments up and running.\u00a0 Personally, I never jumped on the parts-based methodology, but I did take apart the DPM codebase several times.\u00a0 However, when I put it back together, the <a href=\"http:\/\/www.cs.cmu.edu\/~tmalisie\/projects\/iccv11\/\">Exemplar-SVM<\/a> was the result.<\/span><\/div>\n<div class=\"p2\"><\/div>\n<div class=\"p1\"><span class=\"s1\"><b>Big data, Convolutional Neural Networks and the promise of Deep Learning: ~2010 to ~2015<\/b><\/span><\/div>\n<div class=\"p1\"><span class=\"s1\">Sometime around 2008, it was pretty clear that scientists were getting more and more comfortable 
with large datasets.\u00a0 It wasn\u2019t just the rise of \u201cCloud Computing\u201d and \u201cBig Data,\u201d it was the rise of the data scientists.\u00a0 Hacking on equations by morning, developing a prototype during lunch, deploying large scale computations in the evening, and integrating the findings into a production system by sunset.\u00a0 I spent two summers at Google Research, where I saw lots of guys who had made their fame as vision hackers.\u00a0 But they weren\u2019t just writing \u201cacademic\u201d papers at Google &#8212; they were sharding datasets with one hand, compiling results for their managers, writing Borg scripts in their sleep, and piping results into gnuplot (because Jedis don\u2019t need GUIs?). It was pretty clear that big data and a DevOps mentality were here to stay, and the vision researcher of tomorrow would be quite comfortable with large datasets. \u00a0No longer did you need one guy with a mathy PhD, one software engineer, one manager, and one tester.\u00a0 There were plenty of guys who could do all of those jobs.<\/span><\/div>\n<div class=\"p2\"><\/div>\n<div class=\"p1\"><span class=\"s1\"><b>Deep Learning: 1980s &#8211; 2015<\/b><\/span><\/div>\n<div class=\"p1\"><span class=\"s1\">2014 was definitely a big year for Deep Learning.\u00a0 What\u2019s interesting about Deep Learning is that it is a very old technique. \u00a0What we&#8217;re seeing now is essentially the Neural Network 2.0 revolution &#8212; but this time around, we&#8217;re 20 years ahead R&amp;D-wise and our computers are orders of magnitude faster. \u00a0And what\u2019s funny is that the same guys that were championing such techniques in the early 90s were the same guys we were laughing at in the late 90s (because clearly convex methods were superior to the magical NN learning-rate knobs). 
I guess they really had the last laugh because eventually these relentless neural network gurus became the same guys we now all look up to.\u00a0 <b>Geoffrey Hinton, Yann LeCun, Andrew Ng, and Yoshua Bengio are the 4 Titans of Deep Learning.<\/b>\u00a0 By now, just about everybody has jumped ship to become a champion of Deep Learning.<\/span><br \/>\n<span class=\"s1\"><br \/>\n<\/span><span class=\"s1\">But with Google, Facebook, Baidu, and a multitude of little startups riding the Deep Learning wave, <b>who will rise to the top as the master of artificial intelligence?<\/b><\/span><\/div>\n<div class=\"p1\"><span class=\"s1\">\u00a0<\/span><\/div>\n<div class=\"separator\"><img loading=\"lazy\" decoding=\"async\" src=\"http:\/\/www.cs.nyu.edu\/~yann\/research\/deep\/images\/ff1.gif\" alt=\"\" width=\"301\" height=\"320\" border=\"0\" \/><\/div>\n<div class=\"separator\"><a href=\"http:\/\/www.cs.nyu.edu\/~yann\/research\/deep\/\">Yann&#8217;s Deep Learning Page<\/a><\/div>\n<div class=\"p1\"><\/div>\n<div class=\"p2\"><b>How do today&#8217;s deep learning systems resemble the recognition systems of yesteryear?<\/b><\/div>\n<div class=\"p1\">Multiscale convolutional neural networks aren&#8217;t that much different from the feature-based systems of the past. \u00a0The first level neurons in deep learning systems learn to utilize gradients in a way that is similar to hand-crafted features such as SIFT and HOG. \u00a0Objects used to be found in a sliding-window fashion, but now it is easier and sexier to think of this operation as convolving an image with a filter. Some of the best detection systems used to use multiple linear SVMs, combined in some ad-hoc way, and now we are essentially using even more of such linear decision boundaries. 
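The point about stacking linear decision boundaries can be made concrete with a toy two-stage network: it is the non-linear activation between the linear operators that makes the stack more than a single linear map, because without it the two stages collapse algebraically into one. A minimal sketch (random weights, illustration only):

```python
import numpy as np

rng = np.random.default_rng(3)

# Two "stages": linear map -> non-linearity -> linear map.
W1, b1 = rng.normal(size=(8, 5)), rng.normal(size=8)
W2, b2 = rng.normal(size=(3, 8)), rng.normal(size=3)

def relu(z):
    """The non-linear activation piped between the linear operators."""
    return np.maximum(z, 0.0)

def forward(x):
    """Stacked linear operators with a non-linearity in between."""
    return W2 @ relu(W1 @ x + b1) + b2

def forward_no_activation(x):
    """Same stack with the non-linearity removed."""
    return W2 @ (W1 @ x + b1) + b2

x = rng.normal(size=5)

# Without the activation, the two stages collapse into ONE linear map:
# W2 (W1 x + b1) + b2  ==  (W2 W1) x + (W2 b1 + b2).
collapsed_W = W2 @ W1
collapsed_b = W2 @ b1 + b2
```

So depth without non-linearity buys nothing; with it, each stage carves the space with new linear boundaries on top of the previous stage's non-linear output.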
\u00a0Deep learning systems can be thought of as multiple stages of applying linear operators and piping them through a non-linear activation function, but deep learning is more similar to a clever combination of linear SVMs than a memory-ish Kernel-based learning system.<\/p>\n<p>Features these days aren&#8217;t engineered by hand. \u00a0However, architectures of Deep systems are still being designed manually &#8212; and it looks like the experts are the best at this task. \u00a0The operations on the inside of both classic and modern recognition systems are still very much the same. \u00a0You still need to be clever to play in the game, but <i>now you need a big computer<\/i>. There&#8217;s still a lot of room for improvement, so I encourage all of you to be creative in your research.<\/p>\n<p>Research-wise, it never hurts to know where we have been before so that we can better plan for our journey ahead. \u00a0I hope you enjoyed this brief history lesson and the next time you look for insights in your research, don&#8217;t be afraid to look back.<\/p><\/div>\n<div class=\"p1\"><\/div>\n<div class=\"p1\"><span class=\"s1\">To learn more about computer vision techniques:<\/span><\/div>\n<div class=\"p1\"><span class=\"s1\"><a href=\"http:\/\/en.wikipedia.org\/wiki\/Scale-invariant_feature_transform\">SIFT article on Wikipedia<\/a><\/span><\/div>\n<div class=\"p1\"><a href=\"http:\/\/en.wikipedia.org\/wiki\/Bag-of-words_model_in_computer_vision\">Bag of Words article on Wikipedia<\/a><\/div>\n<div class=\"p1\"><a href=\"http:\/\/en.wikipedia.org\/wiki\/Histogram_of_oriented_gradients\">HOG article on Wikipedia<\/a><br \/>\n<a href=\"http:\/\/www.cs.berkeley.edu\/~rbg\/latent\/\">Deformable Part-based Model Homepage<\/a><br \/>\n<a href=\"http:\/\/www.cs.utexas.edu\/~grauman\/research\/projects\/pmk\/pmk_projectpage.htm\">Pyramid Match Kernel Homepage<\/a><br \/>\n<a href=\"http:\/\/www.robots.ox.ac.uk\/~vgg\/research\/vgoogle\/\">\u00abVideo Google\u00bb Image 
Retrieval System<\/a><\/div>\n<div class=\"p1\">\nSome Computer Vision datasets:<br \/>\n<a href=\"http:\/\/www.vision.caltech.edu\/Image_Datasets\/Caltech101\/\">Caltech-101 Dataset<\/a><br \/>\n<a href=\"http:\/\/www.image-net.org\/\">ImageNet Dataset<\/a><\/p>\n<\/div>\n<div class=\"p1\">To learn about the people mentioned in this article:<\/div>\n<div class=\"p1\"><a href=\"http:\/\/www.cs.utexas.edu\/~grauman\/\">Kristen Grauman<\/a>\u00a0(creator of Pyramid Match Kernel, Prof at Univ of Texas)<br \/>\n<a href=\"http:\/\/lear.inrialpes.fr\/people\/triggs\/\">Bill Triggs<\/a>\u00a0(co-creator of HOG, Researcher at INRIA)<br \/>\n<a href=\"https:\/\/sites.google.com\/site\/navneetdalal\/\">Navneet Dalal<\/a>\u00a0(co-creator of HOG, now at Google)<\/div>\n<div class=\"p1\"><a href=\"http:\/\/yann.lecun.com\/\">Yann LeCun<\/a>\u00a0(one of the Titans of Deep Learning, at NYU and Facebook)<\/div>\n<div class=\"p1\"><a href=\"http:\/\/www.cs.toronto.edu\/~hinton\/\">Geoffrey Hinton<\/a>\u00a0(one of the Titans of Deep Learning, at Univ of Toronto and Google)<br \/>\n<a href=\"http:\/\/cs.stanford.edu\/people\/ang\/\">Andrew Ng<\/a>\u00a0(leading the Deep Learning effort at Baidu, Prof at Stanford)<br \/>\n<a href=\"http:\/\/www.iro.umontreal.ca\/~bengioy\/yoshua_en\/index.html\">Yoshua Bengio<\/a>\u00a0(one of the Titans of Deep Learning, Prof at U Montreal)<\/div>\n<div class=\"p1\"><a href=\"http:\/\/www.ics.uci.edu\/~dramanan\/\">Deva Ramanan<\/a>\u00a0(one of the creators of DPM, Prof at UC Irvine)<br \/>\n<a href=\"http:\/\/cs.brown.edu\/~pff\/\">Pedro Felzenszwalb<\/a>\u00a0(one of the creators of DPM, Prof at Brown)<br \/>\n<a href=\"http:\/\/vision.stanford.edu\/feifeili\/\">Fei-Fei Li<\/a>\u00a0(Caltech101 and ImageNet, Prof at Stanford)<br \/>\n<a href=\"http:\/\/www.di.ens.fr\/~josef\/\">Josef Sivic<\/a>\u00a0(Video Google and Visual Words, Researcher at INRIA\/ENS)<br \/>\n<a href=\"http:\/\/en.wikipedia.org\/wiki\/Andrew_Zisserman\">Andrew 
Zisserman<\/a>\u00a0(Geometry-based methods in vision, Prof at Oxford)<br \/>\n<a href=\"https:\/\/www-robotics.jpl.nasa.gov\/people\/Andrew_Johnson\/\">Andrew E. Johnson<\/a>\u00a0(SPIN Images creator, Researcher at JPL)<br \/>\n<a href=\"http:\/\/www.cs.cmu.edu\/~hebert\/\">Martial Hebert<\/a>\u00a0(Geometry-based methods in vision, Prof at CMU)<\/div>\n","protected":false},"excerpt":{"rendered":"<p>From feature descriptors to deep learning: 20 years of computer vision http:\/\/quantombone.blogspot.ie\/2015\/01\/from-feature-descriptors-to-deep.html We all know that deep convolutional neural networks have produced some stellar results on object detection and recognition benchmarks in the past two years (2012-2014), so you might&hellip; <\/p>\n","protected":false},"author":2,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[19],"tags":[],"class_list":["post-758","post","type-post","status-publish","format-standard","hentry","category-information"],"_links":{"self":[{"href":"https:\/\/eyesofthings.eu\/index.php?rest_route=\/wp\/v2\/posts\/758","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/eyesofthings.eu\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/eyesofthings.eu\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/eyesofthings.eu\/index.php?rest_route=\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/eyesofthings.eu\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=758"}],"version-history":[{"count":1,"href":"https:\/\/eyesofthings.eu\/index.php?rest_route=\/wp\/v2\/posts\/758\/revisions"}],"predecessor-version":[{"id":759,"href":"https:\/\/eyesofthings.eu\/index.php?rest_route=\/wp\/v2\/posts\/758\/revisions\/759"}],"wp:attachment":[{"href":"https:\/\/eyesofthings.eu\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=758"}],"wp:term":[{"taxonomy":"category","e
mbeddable":true,"href":"https:\/\/eyesofthings.eu\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=758"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/eyesofthings.eu\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=758"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}