Computer vision - Feature combination/joint features in supervised learning


While trying to come up with appropriate features for a supervised learning problem, I had the following idea. I wondered if it makes sense and, if so, how to formulate it algorithmically.

In an image I want to classify two regions, i.e. two "types" of pixels. I have a bounding structure, let's take a circle, and I know I can limit the search space to the circle. Within the circle I want to find a segmenting contour, i.e. a contour that separates the pixels into an inner class A and an outer class B.

I want to implement the following model:

I know that pixels close to the bounding circle are more likely to be in the outer class B.

Of course, I can simply use the distance to the bounding circle as a feature; the algorithm would then learn the average distance of the inner contour to the bounding circle.

But I wonder whether I can exploit this model assumption in a smarter way. One heuristic idea is to weight the other features by the distance: if a pixel further away from the bounding circle wants to belong to the outer class B, it has to have more convincing other features.
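A minimal sketch of this distance weighting, assuming NumPy; all names here are hypothetical and the particular weighting (multiplying each feature by the distance) is just one possible choice, not a prescribed method:

```python
import numpy as np

def circle_features(coords, center, radius, other_features):
    """Augment per-pixel features with the distance to the bounding circle.

    coords:         (n, 2) pixel coordinates
    center, radius: the known bounding circle
    other_features: (n, k) matrix of the remaining per-pixel features
    """
    # Distance from each pixel to the bounding circle
    # (0 on the circle, growing toward the center).
    dist = radius - np.linalg.norm(coords - np.asarray(center), axis=1)
    # Plain distance feature plus distance-weighted interaction terms:
    # far from the circle, the other features must be "more convincing"
    # to keep a pixel in the outer class B.
    weighted = other_features * dist[:, None]
    return np.column_stack([dist, other_features, weighted])
```

The returned matrix contains the distance itself, the raw features, and the distance-weighted copies, so the learner can still fall back on the raw features if the weighting does not help.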

This leads to a general question:

How can one exploit joint information of features, beyond having each feature individually learned by the algorithm?

And a specific question:

In the outlined setup, does the heuristic idea make sense? At which point of the algorithm should this information be used? Is there recommended literature, or are there buzzwords I could use to search for similar ideas in the literature?

This leads to a general question:

How can one exploit joint information of features, beyond having each feature individually learned by the algorithm?

It is not clear what you are asking here. What do you mean by "individually learned algorithm" and "joint information"? First of all, the problem is too broad; there is no such thing as a "generic supervised learning model". Each of them works in at least a slightly different way, falling into three classes:

  • building a regression model of some kind that maps the input data to the output, and aggregating the results into a classification (linear regression, artificial neural networks)
  • building a geometrical separation of the data (like support vector machines, classification SOMs, etc.)
  • directly (more or less) estimating the probability of the given classes (like naive Bayes, classification restricted Boltzmann machines, etc.)
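As a rough illustration, each of the three classes can be instantiated with scikit-learn; the toy data and the particular model choices below are my own assumptions for the sketch:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC

# Toy two-class data with a roughly linear boundary (purely illustrative).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

# One representative per model class:
models = {
    "regression-style": LogisticRegression(),  # regression aggregated into a classification
    "geometric": SVC(kernel="linear"),         # geometrical separation of the data
    "probabilistic": GaussianNB(),             # direct class-probability estimation
}
scores = {name: m.fit(X, y).score(X, y) for name, m in models.items()}
```

All three learn some classification function from the same raw features; how that function combines the features differs per model class.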

In each of them, "joint information" regarding the features is somehow encoded - the classification function is that joint information. In some cases it is easy to interpret (linear regression), and in some it is practically impossible (deep Boltzmann machines, deep architectures in general).

And the specific question:

In the outlined setup, does the heuristic idea make sense? At which point of the algorithm should this information be used? Is there recommended literature, or are there buzzwords I could use to search for similar ideas in the literature?

To the best of my knowledge this concept is quite doubtful. Many models tend to learn and work better if the data is uncorrelated, while you are trying to do the opposite - correlate everything with one particular feature. This leads to one main concern - why are you doing this? To force the model to use this feature?

  • If it is that important - then maybe supervised learning is not a good idea at all; maybe you can model the problem directly by applying a set of simple rules based on this particular feature?
  • If you know the feature is important, but are aware that in some cases other things matter as well, and you cannot model them, then the problem is how much to weight the feature. Should it be distance*other_feature? Why not sqrt(distance)*feature? Or log(distance)*feature? There are countless possibilities, and searching for the best weighting scheme may be more costly than finding a better machine learning model that can learn from the raw features.
  • If you merely suspect the importance of the feature, the best possible option is to... not trust this belief. Numerous studies have shown that machine learning models are better at selecting features than humans. In fact, this is the whole point of non-linear models.
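The alternative suggested in the second point - rather than committing to one hand-picked scheme, generate candidate interaction terms and let a regularized model weight them - can be sketched as follows (the data and all names are hypothetical):

```python
import numpy as np
from sklearn.linear_model import LogisticRegressionCV
from sklearn.preprocessing import PolynomialFeatures

# Hypothetical raw per-pixel features: column 0 plays the role of the
# distance, columns 1-2 stand in for other appearance features.
rng = np.random.default_rng(1)
X_raw = rng.normal(size=(300, 3))
# Illustrative labels that depend on an interaction between features.
y = (X_raw[:, 0] * X_raw[:, 1] > 0).astype(int)

# Generate all pairwise products (distance*feature among them) as
# candidate features, then let a cross-validated, regularized model
# decide how much weight each one deserves.
poly = PolynomialFeatures(degree=2, interaction_only=True, include_bias=False)
X = poly.fit_transform(X_raw)
clf = LogisticRegressionCV(cv=3).fit(X, y)
```

The regularization path chosen by cross-validation effectively performs the weighting-scheme search for you, instead of you trying distance*feature, sqrt(distance)*feature, and so on by hand.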

In the literature, the problem you are trying to solve is referred to as incorporating expert knowledge into the learning process. There are thousands of examples where some kind of knowledge cannot be directly encoded in the data representation, yet is too valuable to omit. You should research the terms "machine learning expert knowledge" and possible synonyms.

