The Beginning

  • Created by: Tenel Ka
  • Created on: 25-06-17 16:16
Learning
improve at task T, with respect to performance measure P, based on experience E
1 of 24
Design choices
experience type, target function, learning algorithm, representation
2 of 24
Data Mining Solutions
Hybrid systems, unsupervised algorithms
3 of 24
KDD
knowledge discovery in databases, 1989
4 of 24
Concept Learning
search in the space of hypotheses H for the hypothesis best fitting the training examples
5 of 24
The Inductive Learning Hypothesis
Any hypothesis found to approximate the target function well over a sufficiently large set of training examples will also approximate it well over unobserved examples.
6 of 24
Find-S algorithm
Initialize h to the most specific hypothesis. For each positive training instance: if an attribute constraint isn't satisfied, generalize it.
7 of 24
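The Find-S card above can be sketched in Python. The hypothesis representation ('?' as wildcard) and the weather-style attribute values are illustrative assumptions, not taken from the cards:

```python
# A minimal Find-S sketch for conjunctive hypotheses.
# '?' is a wildcard constraint; attribute values are illustrative.

def find_s(examples):
    """examples: list of (attribute_tuple, label) pairs; label True = positive."""
    h = None  # most specific: matches nothing until the first positive example
    for x, positive in examples:
        if not positive:
            continue  # Find-S ignores negative examples
        if h is None:
            h = list(x)  # first positive example becomes the hypothesis
        else:
            # generalize each attribute constraint the example violates
            h = [hi if hi == xi else '?' for hi, xi in zip(h, x)]
    return h

data = [
    (('sunny', 'warm', 'normal'), True),
    (('sunny', 'warm', 'high'), True),
    (('rainy', 'cold', 'high'), False),
]
print(find_s(data))  # ['sunny', 'warm', '?']
```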
Complaints about Find-S
considers only positive examples, might miss the target concept, inconsistency in the training data isn't detected, might miss alternative consistent hypotheses
8 of 24
more general
iff all instances classified positive by h1 are also classified positive by h2
9 of 24
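For conjunctive hypotheses with '?' wildcards, the more-general-than-or-equal relation on the card reduces to a per-attribute check. A minimal sketch, assuming no "match nothing" constraint appears in h1:

```python
def more_general_or_equal(h2, h1):
    """True iff every instance h1 classifies positive is also positive under h2.
    Holds attribute-wise for '?'-style conjunctions: each constraint of h2
    must be a wildcard or equal to the corresponding constraint of h1."""
    return all(c2 == '?' or c2 == c1 for c2, c1 in zip(h2, h1))

print(more_general_or_equal(('sunny', '?'), ('sunny', 'warm')))  # True
print(more_general_or_equal(('sunny', 'warm'), ('sunny', '?')))  # False
```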
LMS weight update rule
Select a training example at random. Compute the output of the learned function with the current weights. Compute the error. Update each weight: wi ← wi + η · featurei · error
10 of 24
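The LMS steps on the card can be sketched as a short training loop. The linear target function, learning rate, and toy data are illustrative assumptions:

```python
import random

def lms_update(weights, features, target, eta=0.01):
    """One LMS step: predict with the current weights, then nudge each weight
    to reduce the squared error (wi <- wi + eta * xi * error)."""
    prediction = sum(w * x for w, x in zip(weights, features))
    error = target - prediction
    return [w + eta * x * error for w, x in zip(weights, features)]

# Toy run: learn f(x) = 2*x + 1, with the bias folded in as a constant feature.
random.seed(0)
w = [0.0, 0.0]
samples = [([x, 1.0], 2 * x + 1) for x in [0.0, 1.0, 2.0, 3.0]]
for _ in range(2000):
    x, y = random.choice(samples)  # select a training example at random
    w = lms_update(w, x, y, eta=0.05)
print([round(wi, 1) for wi in w])  # converges toward [2.0, 1.0]
```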
VS (H,D)
subset of hypotheses from H consistent with all training examples D, delimited by the S and G boundaries
11 of 24
List-then-Eliminate
-> to obtain the VS: start with H; for each x in D and for each h in H: if h doesn't match c(x): remove h.
12 of 24
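The List-then-Eliminate card can be sketched directly: enumerate H, then discard every hypothesis inconsistent with some labeled example. The two-attribute toy domain and the conjunctive matcher are illustrative assumptions:

```python
from itertools import product

def list_then_eliminate(H, examples, matches):
    """Keep every hypothesis in H consistent with all labeled examples."""
    vs = list(H)
    for x, label in examples:
        vs = [h for h in vs if matches(h, x) == label]
    return vs

# Toy domain: two attributes, hypotheses are conjunctions with '?' wildcards.
values = [['sunny', 'rainy'], ['warm', 'cold']]
H = list(product(*[v + ['?'] for v in values]))  # 3 * 3 = 9 hypotheses

def matches(h, x):
    return all(c == '?' or c == xi for c, xi in zip(h, x))

examples = [(('sunny', 'warm'), True), (('rainy', 'cold'), False)]
print(list_then_eliminate(H, examples, matches))
# [('sunny', 'warm'), ('sunny', '?'), ('?', 'warm')]
```

This also makes the "impractical" complaint on the next card concrete: the loop enumerates all of H, which is only feasible for tiny hypothesis spaces.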
Characteristics of List-then-Eliminate
requires finite H, computes the complete VS, ideally one hypothesis remains, impractical
13 of 24
large / empty VS
large VS: more examples, or criteria to select h / empty VS: -> inconsistent samples
14 of 24
What will Candidate-Elimination do on H'
can learn every imaginable target concept, but no generalization beyond observed examples; converges to one hypothesis only when all instances have been presented
15 of 24
H'
like H, but also allows disjunctions
16 of 24
Learning system L
(Dc ∧ xi) ≻ L(xi, Dc)   (≻: inductive inference)
17 of 24
inductive bias
minimal set of assertions B such that for all xi ∈ X: (B ∧ Dc ∧ xi) ⊨ L(xi, Dc)   (⊨: deductive inference)
18 of 24
Decision Trees
learn discrete-valued target functions, represent a disjunction of conjunctions of attribute values, writable as a set of rules
19 of 24
representation of decision trees
internal node: attribute; branch: one attribute value; leaf node: classification
20 of 24
application of DT
instances describable by attribute-value pairs, discrete-valued target function, disjunctive hypotheses may be required, possibly noisy training data
21 of 24
learning a decision tree
finding the best order to ask for the attribute values
22 of 24
ID3
Iterative Dichotomiser 3, 1986; find the best attribute by the distribution of its values over the examples and place it at the root
23 of 24
Main Loop of ID3
Get the best decision attribute, assign it to the node, create a new descendant node for each value, sort the training examples accordingly; continue until the training examples are perfectly classified
24 of 24
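The "best decision attribute" step in the ID3 loop is usually chosen by information gain. A minimal sketch of that selection; the attribute names and toy examples are illustrative assumptions, not from the cards:

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a list of class labels."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(examples, attr):
    """Entropy reduction from splitting `examples` (dicts with a 'label' key)
    on the values of attribute `attr`."""
    labels = [e['label'] for e in examples]
    gain = entropy(labels)
    for value in set(e[attr] for e in examples):
        subset = [e['label'] for e in examples if e[attr] == value]
        gain -= len(subset) / len(examples) * entropy(subset)
    return gain

# Illustrative toy data: 'outlook' separates the classes perfectly, 'wind' not at all.
data = [
    {'outlook': 'sunny', 'wind': 'weak', 'label': 'no'},
    {'outlook': 'sunny', 'wind': 'strong', 'label': 'no'},
    {'outlook': 'rainy', 'wind': 'weak', 'label': 'yes'},
    {'outlook': 'rainy', 'wind': 'strong', 'label': 'yes'},
]
# ID3 places the attribute with the highest gain at the root:
best = max(['outlook', 'wind'], key=lambda a: information_gain(data, a))
print(best)  # outlook
```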
