
Pattern Recognition and Neural Networks (English Edition)

2010-03-03 


Basic information
· Publisher: 人民邮电出版社 (Posts & Telecom Press)
· Pages: 403
· Publication date: June 2009
· ISBN: 7115210640 / 9787115210647
· Barcode: 9787115210647
· Edition: 1st
· Binding: Paperback
· Trim size: 16开
· Language: English
· Series: 图灵原版计算机科学系列 (Turing Original Computer Science Series)
· Original title: Pattern Recognition and Neural Networks

Description: Pattern Recognition and Neural Networks is a classic work on pattern recognition and neural networks, covering the statistical methods, neural networks, and machine learning branches of the field. Opening with an introduction and examples, the book covers statistical decision theory, linear discriminant analysis, flexible discriminants, feed-forward neural networks, non-parametric methods, tree-structured classifiers, belief networks, unsupervised methods, and finding good pattern features.
The book can serve as a textbook for graduate courses in statistics, science, and engineering, and is also a highly valuable reference for researchers in pattern recognition and neural networks.
About the author: B. D. Ripley is a renowned statistician and Professor of Applied Statistics at the University of Oxford. He has made important contributions to spatial statistics and pattern recognition, and has had a major influence on the development of S and the adoption of S-PLUS and R. His work on artificial neural networks, published in the 1990s, was highly influential and led statisticians to take an interest in machine learning and data mining. Besides this book, he is also the author of Modern Applied Statistics with S and S Programming.
Reviews: "…an excellent text on pattern classification and the application of neural network techniques… Ripley has written a thorough, readable text… Presenting the mathematical theory of statistical pattern recognition and neural networks in a concise form and an engaging style, this book is bound to circulate widely in the field."
  — Nature
"This book deserves particular attention; it is a perfect combination of theory and examples."
  — A. Gelman, journal of the International Statistical Institute
"I strongly recommend this book; any researcher can appreciate Ripley's erudition and will profit greatly from the extensive references it provides."
  — Dee Denteneer, ITW Nieuws
"Anyone interested in the principles and methods of statistical data analysis will benefit from it… it points the way for theoretical developments in the years to come."
  — Stephen Roberts, The Times Higher Education Supplement
Editorial review: With advances in artificial intelligence, information retrieval, and large-scale data processing, pattern recognition has become a hot research topic. In this book, Ripley brings together two key strands of the field, statistical methods for pattern recognition and neural-network-based machine learning, building a solid foundation for neural network theory on statistical decision theory and computational learning theory. At the theoretical level the book emphasizes probability and statistics; at the practical level it emphasizes practical methods of pattern recognition.
The book has been adopted as a textbook at leading universities worldwide, and is essential reading for anyone working on pattern recognition and neural networks.
Contents
1 Introduction and Examples  1
1.1 How do neural methods differ?  4
1.2 The pattern recognition task  5
1.3 Overview of the remaining chapters  9
1.4 Examples  10
1.5 Literature  15

2 Statistical Decision Theory  17
2.1 Bayes rules for known distributions  18
2.2 Parametric models  26
2.3 Logistic discrimination  43
2.4 Predictive classification  45
2.5 Alternative estimation procedures  55
2.6 How complex a model do we need?  59
2.7 Performance assessment  66
2.8 Computational learning approaches  77

3 Linear Discriminant Analysis  91
3.1 Classical linear discrimination  92
3.2 Linear discriminants via regression  101
3.3 Robustness  105
3.4 Shrinkage methods  106
3.5 Logistic discrimination  109
3.6 Linear separation and perceptrons  116

4 Flexible Discriminants  121
4.1 Fitting smooth parametric functions  122
4.2 Radial basis functions  131
4.3 Regularization  136

5 Feed-forward Neural Networks  143
5.1 Biological motivation  145
5.2 Theory  147
5.3 Learning algorithms  148
5.4 Examples  160
5.5 Bayesian perspectives  163
5.6 Network complexity  168
5.7 Approximation results  173

6 Non-parametric Methods  181
6.1 Non-parametric estimation of class densities  181
6.2 Nearest neighbour methods  191
6.3 Learning vector quantization  201
6.4 Mixture representations  207

7 Tree-structured Classifiers  213
7.1 Splitting rules  216
7.2 Pruning rules  221
7.3 Missing values  231
7.4 Earlier approaches  235
7.5 Refinements  237
7.6 Relationships to neural networks  240
7.7 Bayesian trees  241

8 Belief Networks  243
8.1 Graphical models and networks  246
8.2 Causal networks  262
8.3 Learning the network structure  275
8.4 Boltzmann machines  279
8.5 Hierarchical mixtures of experts  283

9 Unsupervised Methods  287
9.1 Projection methods  288
9.2 Multidimensional scaling  305
9.3 Clustering algorithms  311
9.4 Self-organizing maps  322

10 Finding Good Pattern Features  327
10.1 Bounds for the Bayes error  328
10.2 Normal class distributions  329
10.3 Branch-and-bound techniques  330
10.4 Feature extraction  331

A Statistical Sidelines  333
A.1 Maximum likelihood and MAP estimation  333
A.2 The EM algorithm  334
A.3 Markov chain Monte Carlo  337
A.4 Axioms for conditional independence  339
A.5 Optimization  342

Glossary  347
References  355
Author Index  391
Subject Index  399

……
Preface: Pattern recognition has a long and respectable history within engineering, especially for military applications, but the cost of the hardware both to acquire the data (signals and images) and to compute the answers made it for many years a rather specialist subject. Hardware advances have made the concerns of pattern recognition of much wider applicability. In essence it covers the following problem:
'Given some examples of complex signals and the correct decisions for them, make decisions automatically for a stream of future examples.'

There are many examples from everyday life:
Name the species of a flowering plant.
Grade bacon rashers from a visual image.
Classify an X-ray image of a tumour as cancerous or benign.
Decide to buy or sell a stock option.
Give or refuse credit to a shopper.
Many of these are currently performed by human experts, but it is increasingly becoming feasible to design automated systems to replace the expert and either perform better (as in credit scoring) or 'clone' the expert (as in aids to medical diagnosis).
Neural networks have arisen from analogies with models of the way that humans might approach pattern recognition tasks, although they have developed a long way from the biological roots. Great claims have been made for these procedures, and although few of these claims have withstood careful scrutiny, neural network methods have had great impact on pattern recognition practice. A theoretical understanding of how they work is still under construction, and is attempted here by viewing neural networks within a statistical framework, together with methods developed in the field of machine learning.
One of the aims of this book is to be a reference resource, so almost all the results used are proved (and the remainder are given references to complete proofs). The proofs are often original.
Excerpt
The calculations here are from Hjort (1986); versions of these formulae are given by Aitchison & Dunsmore (1975) (up to the differences in the meaning of their multivariate t) and Geisser (1993). This approach is originally due to Geisser (1964, 1966).
The differences between the predictive and plug-in approaches will be small or zero for roughly equally prevalent classes. In other cases, for example screening for rare diseases or when very few data are available, the differences can be dramatic as shown by the examples in Aitchison & Dunsmore (1975, 11.5-11.6). The latter do have groups with nk only slightly greater than p, for example p = 8 and n2 = 11 when fitting a covariance matrix to each class, which would be seen as over-fitting in the plug-in approach. (Indeed, one might choose not to use all the variables, or perhaps to restrict the class of covariance matrices considered.)
Aitchison et al. (1977) conducted a small-sample simulation comparison of the plug-in and predictive methods for two multivariate normal populations. They were (correctly) criticized by Moran & Murphy (1979) for using the accuracy of the estimation of the log-odds as the basis of comparison rather than error rates, and for including mainly equal sample sizes of the two classes. Moran & Murphy's results show very little difference in the error rates, and show that for estimation of the log-odds the debiasing methods of Section 2.5 are effective in removing the dramatic optimism of the plug-in method where it occurs.
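The "dramatic optimism of the plug-in method" can be illustrated with a small sketch (not from the book; the function names and toy data are hypothetical). For a one-dimensional normal class model under a vague prior, the predictive density is a Student-t with n-1 degrees of freedom, location at the sample mean, and scale s*sqrt(1 + 1/n); with tiny samples its heavy tails temper the extreme log-odds that the plug-in normal produces far from the training data:

```python
import math

def plugin_logpdf(x, xs):
    """Plug-in log-density: a normal density with mean and (unbiased)
    variance estimated from the sample xs, treated as if known exactly."""
    n = len(xs)
    m = sum(xs) / n
    s2 = sum((v - m) ** 2 for v in xs) / (n - 1)
    return -0.5 * math.log(2 * math.pi * s2) - (x - m) ** 2 / (2 * s2)

def predictive_logpdf(x, xs):
    """Predictive log-density under a vague prior: a Student-t with
    n-1 degrees of freedom, centred at the sample mean, with scale
    s * sqrt(1 + 1/n).  The heavier tails reflect parameter uncertainty."""
    n = len(xs)
    m = sum(xs) / n
    s2 = sum((v - m) ** 2 for v in xs) / (n - 1)
    df = n - 1
    scale = math.sqrt(s2 * (1 + 1 / n))
    z = (x - m) / scale
    return (math.lgamma((df + 1) / 2) - math.lgamma(df / 2)
            - 0.5 * math.log(df * math.pi) - math.log(scale)
            - (df + 1) / 2 * math.log1p(z * z / df))

# Two tiny classes (toy data): with only three points each, the
# plug-in normal is wildly over-confident far from the data.
c1 = [0.0, 0.2, -0.1]
c2 = [1.0, 1.2, 0.9]
x = 3.0
lo_plug = plugin_logpdf(x, c1) - plugin_logpdf(x, c2)
lo_pred = predictive_logpdf(x, c1) - predictive_logpdf(x, c2)
print(f"plug-in log-odds:    {lo_plug:.1f}")
print(f"predictive log-odds: {lo_pred:.1f}")
```

Both methods favour class 2 at x = 3, but the plug-in log-odds are orders of magnitude more extreme, which is exactly the small-sample optimism the debiasing methods of Section 2.5 aim to remove.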