
Information Theory in Computer Vision and Pattern Recognition (Full-Color English Edition, Shannon Information Science Classics)

  • Authors: (Spain) Francisco Escolano Ruiz // Pablo Suau Pérez // Boyan Ivanov Bonev | Editors: Chen Liang // Liu Yeqing
  • Publisher: World Publishing Corporation (世圖出版公司)
  • ISBN: 9787519296988
  • Publication date: 2023/01/01
  • Binding: Paperback
  • Pages: 355
List price: RMB 159      Sale price:

Synopsis
    Information theory has proved effective for solving many computer vision and pattern recognition (CVPR) problems, such as image matching, clustering and segmentation, saliency detection, feature selection, and optimal classifier design. Today, researchers are bringing elements of information theory into the CVPR arena on a broad scale, including measures (entropy, mutual information), principles (maximum entropy, minimax entropy), and theories (rate distortion theory, the method of types). This book explores and introduces these elements through an incremental-complexity approach, while also formulating CVPR problems and presenting the most representative algorithms. Where the same principles apply to different problems, the authors highlight the interesting connections between them, seeking a comprehensive research roadmap. The result is a resource for researchers in both CVPR and machine learning.
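    As a concrete taste of the "measures" named above, the sketch below estimates Shannon entropy and mutual information from intensity histograms; mutual information is the quantity driving the image-alignment methods surveyed in Chapter 4 of the book. This is a minimal illustrative sketch, not code from the book: the bin count, the plug-in histogram estimator, and the toy images are all assumptions.

```python
# Histogram ("plug-in") estimates of Shannon entropy and mutual information.
import numpy as np

def entropy(x, bins=32):
    """Estimate Shannon entropy H(X) in bits from a histogram of values."""
    hist, _ = np.histogram(x, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]                       # convention: 0 * log 0 = 0
    return -np.sum(p * np.log2(p))

def mutual_information(a, b, bins=32):
    """I(A;B) = H(A) + H(B) - H(A,B), all estimated from one joint histogram.
    High MI between two overlapping images indicates good alignment."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    p_ab = joint / joint.sum()
    p_a, p_b = p_ab.sum(axis=1), p_ab.sum(axis=0)
    h = lambda p: -np.sum(p[p > 0] * np.log2(p[p > 0]))
    return h(p_a) + h(p_b) - h(p_ab)

# Toy check: a noisy copy of an image shares far more information with the
# original than an unrelated random image does.
rng = np.random.default_rng(0)
img = rng.integers(0, 256, (64, 64)).astype(float)
noisy = img + rng.normal(0, 25, img.shape)
other = rng.integers(0, 256, (64, 64)).astype(float)
print(mutual_information(img, noisy))   # relatively high
print(mutual_information(img, other))   # close to 0
```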

About the Authors
(Spain) Francisco Escolano Ruiz // Pablo Suau Pérez // Boyan Ivanov Bonev | Editors: Chen Liang // Liu Yeqing

Table of Contents
1  Introduction
  1.1  Measures, Principles, Theories, and More
  1.2  Detailed Organization of the Book
  1.3  The ITinCVPR Roadmap
2  Interest Points, Edges, and Contour Grouping
  2.1  Introduction
  2.2  Entropy and Interest Points
    2.2.1  Kadir and Brady Scale Saliency Detector
    2.2.2  Point Filtering by Entropy Analysis Through Scale Space
    2.2.3  Chernoff Information and Optimal Filtering
    2.2.4  Bayesian Filtering of the Scale Saliency Feature Extractor: The Algorithm
  2.3  Information Theory as Evaluation Tool: The Statistical Edge Detection Case
    2.3.1  Statistical Edge Detection
    2.3.2  Edge Localization
  2.4  Finding Contours Among Clutter
    2.4.1  Problem Statement
    2.4.2  A* Road Tracking
    2.4.3  A* Convergence Proof
  2.5  Junction Detection and Grouping
    2.5.1  Junction Detection
    2.5.2  Connecting and Filtering Junctions
  Problems
  2.6  Key References
3  Contour and Region-Based Image Segmentation
  3.1  Introduction
  3.2  Discriminative Segmentation with Jensen-Shannon Divergence
    3.2.1  The Active Polygons Functional
    3.2.2  Jensen-Shannon Divergence and the Speed Function
  3.3  MDL in Contour-Based Segmentation
    3.3.1  B-Spline Parameterization of Contours
    3.3.2  MDL for B-Spline Parameterization
    3.3.3  MDL Contour-Based Segmentation
  3.4  Model Order Selection in Region-Based Segmentation
    3.4.1  Jump-Diffusion for Optimal Segmentation
    3.4.2  Speeding-up the Jump-Diffusion Process
    3.4.3  K-adventurers Algorithm
  3.5  Model-Based Segmentation Exploiting the Maximum Entropy Principle
    3.5.1  Maximum Entropy and Markov Random Fields
    3.5.2  Efficient Learning with Belief Propagation
  3.6  Integrating Segmentation, Detection and Recognition
    3.6.1  Image Parsing
    3.6.2  The Data-Driven Generative Model
    3.6.3  The Power of Discriminative Processes
    3.6.4  The Usefulness of Combining Generative and Discriminative
  Problems
  3.7  Key References
4  Registration, Matching, and Recognition
  4.1  Introduction
  4.2  Image Alignment and Mutual Information
    4.2.1  Alignment and Image Statistics
    4.2.2  Entropy Estimation with Parzen's Windows
    4.2.3  The EMMA Algorithm
    4.2.4  Solving the Histogram-Binning Problem
  4.3  Alternative Metrics for Image Alignment
    4.3.1  Normalizing Mutual Information
    4.3.2  Conditional Entropies
    4.3.3  Extension to the Multimodal Case
    4.3.4  Affine Alignment of Multiple Images
    4.3.5  The Rényi Entropy
    4.3.6  Rényi's Entropy and Entropic Spanning Graphs
    4.3.7  The Jensen-Rényi Divergence and Its Applications
    4.3.8  Other Measures Related to Rényi Entropy
    4.3.9  Experimental Results
  4.4  Deformable Matching with Jensen Divergence and Fisher Information
    4.4.1  The Distributional Shape Model
    4.4.2  Multiple Registration and Jensen-Shannon Divergence
    4.4.3  Information Geometry and Fisher-Rao Information
    4.4.4  Dynamics of the Fisher Information Metric
  4.5  Structural Learning with MDL
    4.5.1  The Usefulness of Shock Trees
    4.5.2  A Generative Tree Model Based on Mixtures
    4.5.3  Learning the Mixture
    4.5.4  Tree Edit-Distance and MDL
  Problems
  4.6  Key References
5  Image and Pattern Clustering
  5.1  Introduction
  5.2  Gaussian Mixtures and Model Selection
    5.2.1  Gaussian Mixtures Methods
    5.2.2  Defining Gaussian Mixtures
    5.2.3  EM Algorithm and Its Drawbacks
    5.2.4  Model Order Selection
  5.3  EBEM Algorithm: Exploiting Entropic Graphs
    5.3.1  The Gaussianity Criterion and Entropy Estimation
    5.3.2  Shannon Entropy from Rényi Entropy Estimation
    5.3.3  Minimum Description Length for EBEM
    5.3.4  Kernel-Splitting Equations
    5.3.5  Experiments
  5.4  Information Bottleneck and Rate Distortion Theory
    5.4.1  Rate Distortion Theory Based Clustering
    5.4.2  The Information Bottleneck Principle
  5.5  Agglomerative IB Clustering
    5.5.1  Jensen-Shannon Divergence and Bayesian Classification Error
    5.5.2  The AIB Algorithm
    5.5.3  Unsupervised Clustering of Images
  5.6  Robust Information Clustering
  5.7  IT-Based Mean Shift
    5.7.1  The Mean Shift Algorithm
    5.7.2  Mean Shift Stop Criterion and Examples
    5.7.3  Rényi Quadratic and Cross Entropy from Parzen Windows
    5.7.4  Mean Shift from an IT Perspective
  5.8  Unsupervised Classification and Clustering Ensembles
    5.8.1  Representation of Multiple Partitions
    5.8.2  Consensus Functions
  Problems
  5.9  Key References
6  Feature Selection and Transformation
  6.1  Introduction
  6.2  Wrapper and the Cross Validation Criterion
    6.2.1  Wrapper for Classifier Evaluation
    6.2.2  Cross Validation
    6.2.3  Image Classification Example
    6.2.4  Experiments
  6.3  Filters Based on Mutual Information
    6.3.1  Criteria for Filter Feature Selection
    6.3.2  Mutual Information for Feature Selection
    6.3.3  Individual Features Evaluation, Dependence and Redundancy
    6.3.4  The min-Redundancy Max-Relevance Criterion
    6.3.5  The Max-Dependency Criterion
    6.3.6  Limitations of the Greedy Search
    6.3.7  Greedy Backward Search
    6.3.8  Markov Blankets for Feature Selection
    6.3.9  Applications and Experiments
  6.4  Minimax Feature Selection for Generative Models
    6.4.1  Filters and the Maximum Entropy Principle
    6.4.2  Filter Pursuit through Minimax Entropy
  6.5  From PCA to gPCA
    6.5.1  PCA, FastICA, and Infomax
    6.5.2  Minimax Mutual Information ICA
    6.5.3  Generalized PCA (gPCA) and Effective Dimension
  Problems
  6.6  Key References
7  Classifier Design
  7.1  Introduction
  7.2  Model-Based Decision Trees
    7.2.1  Reviewing Information Gain
    7.2.2  The Global Criterion
    7.2.3  Rare Classes with the Greedy Approach
    7.2.4  Rare Classes with Global Optimization
  7.3  Shape Quantization and Multiple Randomized Trees
    7.3.1  Simple Tags and Their Arrangements
    7.3.2  Algorithm for the Simple Tree
    7.3.3  More Complex Tags and Arrangements
    7.3.4  Randomizing and Multiple Trees
  7.4  Random Forests
    7.4.1  The Basic Concept
    7.4.2  The Generalization Error of the RF Ensemble
    7.4.3  Out-of-Bag Estimates of the Error Bound
    7.4.4  Variable Selection: Forest-RI vs. Forest-RC
  7.5  Infomax and Jensen-Shannon Boosting
    7.5.1  The Infomax Boosting Algorithm
    7.5.2  Jensen-Shannon Boosting
  7.6  Maximum Entropy Principle for Classification
    7.6.1  Improved Iterative Scaling
    7.6.2  Maximum Entropy and Information Projection
  7.7  Bregman Divergences and Classification
    7.7.1  Concept and Properties
    7.7.2  Bregman Balls and Core Vector Machines
    7.7.3  Unifying Classification: Bregman Divergences and Surrogates
  Problems
  7.8  Key References
References
Index
Color Plates
