Object Recognition by a Mobile Robot using Omni-directional Vision

Henrik Andreasson, Tom Duckett

AASS, Dept. of Technology, Örebro University, SE-701 82 Örebro

Abstract. This paper proposes a new method for recognizing typical objects found in indoor office environments (tables, chairs, etc.) by a mobile robot equipped with an omni-directional vision sensor, without requiring any pre-installed geometric models of objects. The approach utilizes the motion of the robot to acquire an internal representation of a given object using structure from motion or optic flow. First, a set of low-level point features is selected from the segmented area of the image containing the object. The low-level features are tracked by a set of independent Kalman filters as the robot moves through the environment, in order to extract the 3D positions of these points. A set of high-level features is then extracted for input to a pattern recognition system, based on the spatial distribution of the low-level point features. The same feature extraction method is then applied for recognition of the learned objects. Results are presented for some first experiments on a real robot in a laboratory environment.

1 Introduction

The problem of object recognition is a central topic of interest for researchers in computer vision, artificial intelligence, the cognitive sciences and robotics. Without this ability, the possibilities for robots to carry out useful tasks remain limited. However, after over fifty years of research, there still exists no general purpose algorithm for object recognition by autonomous robots. Instead, the problem is usually solved on a system-by-system basis, using recognition techniques that are hand-crafted for a small class of objects in one particular application. Any improvement in object recognition technology would be useful, and a significant advance could revolutionize the study of embodied intelligent systems such as robots.

There are a number of useful properties that any object recognition method for an intelligent robot should have in order to be useful for applications in real-world environments. It should be able to handle objects with all kinds of shapes, and to sense objects in a cluttered and occluded world. It should be invariant to variations in scale, translation, lighting conditions, etc., and robust to the extra noise caused by the motion of the robot. Furthermore, there should be no need for pre-installed object models, and the algorithm should be able to run in real-time. An ideal approach would be to let the robot drive around in the environment and learn its own internal models of detected objects from its own sensory perceptions, and then to be able to recognize the same types of objects using the same perceptual apparatus.

This paper presents a first effort at building a complete object recognition system for a mobile robot, using omni-directional vision as the main sensory input. Our approach exploits the mobility of the robot, using this information to extract structure from motion from a sequence of images. In this paper, we consider the recognition of typical objects found in indoor office environments (where state-of-the-art mobile robots are currently able to navigate), including tables, chairs and trash-cans. It is assumed that these objects are oriented consistently in the vertical axis, i.e., chairs and tables remain upright and do not fall over, but the recognition algorithm should be invariant to rotations around the vertical axis. In the current implementation, the area of the image containing the object is first segmented by hand; future work will investigate a fully automatic system. An overview of the recognition algorithm, together with a detailed description of its component functions, can be found in Section 2 (see also Fig. 1). This is followed by experimental results (Section 3), together with conclusions and suggestions for future work (Section 4).

Figure 1: Left: overview of the recognition algorithm. The boxes indicate the main steps, with intermediate data structures. Dashed boxes indicate steps that are performed manually in the current implementation. Right: robot platform (ActivMedia PeopleBot). The omni-directional camera is mounted on top of the robot.

1.1 Related Work

The most common approach in 3D object recognition is to collect a set of 2D representations of the object from different views, without requiring a deep understanding of the underlying 3D structure of the object. The views can be represented by an aspect graph [2], where each view is connected to its closest neighbours. The features used for recognition can be global shape models [6], HOT curves [13], or more local representations such as edges [14]. Other recognition methods that do not require explicit geometric information often use colour and luminance information, e.g., with colour co-occurrence histograms [5], phase-based local features [4], or principal components analysis [7].

If a 3D surface model of the object is available, then representation by spin images can be used [12]. Spin images are representations of surfaces that are constructed from a dense collection of points, and are suitable for registration or matching of surfaces. This technique has been shown to work well in cluttered environments. Other kinds of sensors, such as range-finder sensors, can also be used to extract 3D surface models for object recognition [15].

The use of motion to detect 3D structure is often called Structure from Motion (SfM); for an introduction to this topic see [11]. The focus is usually on how to detect the motion of the camera or the object without any prior correspondence information. Much of the research in SfM concerns recovering structure when both the camera parameters and the 3D motion between the camera and the object are unknown; this is not the case in our method.

Figure 2: Left: original image from the omni-cam. Right: transformed bird view image. Size: 400x400 pixels; resolution: 40 pixels/meter.

2 Method

Our approach is to use a mobile robot with a single camera, without requiring any pre-installed models of the objects. The camera is omni-directional, i.e., the viewing angle is (almost) 360 degrees. It is placed on top of the robot, looking downwards above the floor. The camera is fixed, and it is only the robot that moves. In the current experiments, the robot only travels forwards without rotation. The same sensors (omni-cam plus odometry for estimating self-motion) are used for both training of the classifiers and recognition of the objects. By moving the robot around in a known manner and measuring the pixel displacement in a sequence of images, the 3D structure of the object is estimated.

An overview of the recognition algorithm is given in Fig. 1. The area of the image containing the object is first segmented manually. After transformation of the image to a bird's eye view or bird view (Fig. 2), a set of low-level point features is extracted (Fig. 3). The point features are tracked by a set of independent Kalman filters in order to estimate their 3D coordinates, by reference to the ground velocity of the robot. The tracked points are then grouped by a histogram according to their relative height in the world (Fig. 3, right) in order to obtain a set of high-level features for input to a pattern recognition system. The individual steps of the algorithm are described in detail as follows.

Figure 3: From left to right: segmented omni-image, bird view, low-level point features that are tracked, and high-level feature extraction for pattern recognition.

2.1 Segmentation

In the experiments presented in this paper, objects were manually segmented in the original omni-cam images. Images were collected with the robot driving past a stationary object standing in front of a white background in the robotics laboratory at our institute. The border of the object was manually selected (using the GNU Image Manipulation Program, 'The GIMP') and the rest of the image was filled with white.

2.2 Image Transformation

Due to the curvature of the mirror in the omni-cam, it is difficult to extract geometrical features directly from the raw images (e.g., horizontal surfaces appear twisted). Instead, a transformation to a 'bird view' is used, which can be defined as an image taken from a view located high above the surface. The bird view transforms the image so as to keep the physical shape intact in the ground plane. For example, a chessboard lying horizontally at any height will give a non-twisted chessboard in the bird view transformation.

The size of an object in the bird view will increase with height, whereas horizontal areas at ground level are unchanged by the transformation. Lines in the real world transform into lines in the bird view, compared to arcs in the original omni-cam image. Areas that are higher will appear bigger and further away from the image center. The resolution of the bird view is given in pixels/meter at the ground level, and therefore the pixel coordinates can be mapped directly to a world coordinate system. For a more detailed description of transformations on omni-cam images see [10].

To transform an image into a bird view, the transformation function that converts from an $(x, y)$ coordinate in the real world to a pixel in the omni-cam image has to be known. Since the distance is invariant to the orientation of the camera, the transformation function can be written as

$$R_{meter} = f(r_{pixel}), \qquad (1)$$

where $R_{meter}$ is the distance from the center of the camera to the point on the ground level in the real world, and $r_{pixel}$ is the distance calculated in pixels in the omni-cam image from the center to the pixel corresponding to that point in the real world. This function can be calculated analytically if the parameters for the mirror and the camera are known, which is rarely the case. The function used in our experiments was a polynomial of degree 3 interpolated with a standard least squares fitting algorithm, using images where the distance and the corresponding pixels were known. To speed up the transformation, a look-up table with memory pointers to the pixels in the omni-cam image was created.

Figure 4: Calibration of the omni-cam to find the bird view transformation function.
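As an illustration, the following minimal Python/OpenCV sketch shows how such a bird view could be produced: a degree-3 polynomial is fitted by least squares to hand-measured (distance, pixel) calibration pairs, and a look-up table maps every bird-view pixel to the omni-image pixel it should be sampled from. All numeric values (calibration pairs, image center, file name) are illustrative assumptions, not the paper's calibration data.

```python
import numpy as np
import cv2

# Hypothetical hand-measured calibration pairs: radial ground distance
# R_meter (meters) and corresponding radial pixel distance r_pixel in the
# omni image. These numbers are illustrative only.
R_meter = np.array([0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 5.0])
r_pixel = np.array([310.0, 255.0, 205.0, 170.0, 125.0, 100.0, 85.0])

# Degree-3 least-squares polynomial. We fit the inverse mapping
# R_meter -> r_pixel, because the bird view is built by asking, for each
# bird-view pixel (a known ground distance), where to sample the omni image.
coeffs = np.polyfit(R_meter, r_pixel, deg=3)

SIZE = 400              # bird view size: 400x400 pixels (as in Fig. 2)
RES = 40.0              # resolution: 40 pixels/meter at ground level
cx, cy = 320.0, 240.0   # assumed center of the omni image

# Look-up tables for cv2.remap: for every bird-view pixel, the omni-image
# pixel it is copied from (the paper used a table of memory pointers).
u, v = np.meshgrid(np.arange(SIZE), np.arange(SIZE))
dx = (u - SIZE / 2) / RES          # ground-plane offsets in meters
dy = (v - SIZE / 2) / RES
R = np.hypot(dx, dy)               # radial ground distance per pixel
theta = np.arctan2(dy, dx)         # bearing is preserved by the mirror
r = np.polyval(coeffs, R)          # radial pixel distance in the omni image
map_x = (cx + r * np.cos(theta)).astype(np.float32)
map_y = (cy + r * np.sin(theta)).astype(np.float32)

omni = cv2.imread('omni.png')      # hypothetical input image
birdview = cv2.remap(omni, map_x, map_y, cv2.INTER_LINEAR)
```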

2.3 Low-level Feature Extraction

The points to track are selected in the first image in the sequence. Points that are located on corners of objects are the easiest to track. To select points with strong matching capabilities, a neighbourhood $N$ of 3x3 pixels is selected around each pixel in the image. The derivatives $I_x$ and $I_y$ are calculated with a Sobel operator for all pixels in the block $N$. For each pixel, the minimum eigenvalue $\lambda$ is calculated for the matrix

$$M = \begin{bmatrix} \sum_{N} I_x^2 & \sum_{N} I_x I_y \\ \sum_{N} I_x I_y & \sum_{N} I_y^2 \end{bmatrix},$$

where the sums $\sum_{N}$ are performed over the neighbourhood $N$. The pixels with the highest values of $\lambda$ are then selected by thresholding. For further details see [16], or the function cvGoodFeaturesToTrack in the OpenCV library [3].
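In modern OpenCV the same minimum-eigenvalue selection is exposed through cv2.goodFeaturesToTrack. The sketch below shows a plausible call; the parameter values are our assumptions, not taken from the paper.

```python
import cv2

# Load one bird-view frame (hypothetical filename) and convert to grayscale.
gray = cv2.cvtColor(cv2.imread('birdview_000.png'), cv2.COLOR_BGR2GRAY)

# Shi-Tomasi selection: keep pixels whose gradient matrix M has a large
# minimum eigenvalue, thresholded relative to the strongest response.
points = cv2.goodFeaturesToTrack(
    gray,
    maxCorners=100,           # upper bound on the number of tracked points
    qualityLevel=0.05,        # threshold as a fraction of the best eigenvalue
    minDistance=5,            # minimum spacing between selected points
    blockSize=3,              # 3x3 neighbourhood N, as in the paper
    useHarrisDetector=False,  # False selects the minimum-eigenvalue test
)
```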

2.4 Tracking of the Low-level Features

The next stage is to track the points as the robot drives past the object at a constant speed, using the sequence of images. By tracking the points with the bird view projection, it is possible to estimate the height ($z$-coordinate) of a point directly from its relative velocity in the image sequence, and then to use this information to estimate the horizontal position ($x$- and $y$-coordinates). In our approach, the points are tracked with the iterative Lucas-Kanade method using pyramids [1]. The idea is to find the displacement $(d_x, d_y)$ that minimizes the residual $\epsilon$, defined as

$$\epsilon(d_x, d_y) = \sum_{x = p_x - \omega_x}^{p_x + \omega_x} \; \sum_{y = p_y - \omega_y}^{p_y + \omega_y} \left( I(x, y) - J(x + d_x, y + d_y) \right)^2, \qquad (2)$$

where $I$ is the image at time $t$, $J$ is the image at time $t + \delta t$, and $(p_x, p_y)$ is the point to track. $\omega_x$ and $\omega_y$ define the size of the area over which the residual is minimized; in these experiments, equal values of $\omega_x$ and $\omega_y$ were used.

To handle large pixel displacements without requiring too much computation, the images $I$ and $J$ are divided into 3-4 more images that are sub-sampled by a factor of 2. The first minimization of $\epsilon$ and the first estimates of $d_x$ and $d_y$ are computed on the most sub-sampled image. The minimization is then performed iteratively on the next level using the previous estimates of $d_x$ and $d_y$, and so on. This makes it possible to track points with large displacements with high precision. For a full description of this algorithm see [1]. To remove noise and to estimate the velocity of the points, an independent Kalman filter is applied to each of the tracked points [9]. In this work we assume that the robot travels forward without rotation, so odometry is used only to estimate the height $z$ of the tracked point.

Figure 5: Estimation of the $(x, y)$ position of a tracked point, given the corresponding $z$-coordinate.
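A minimal sketch of this tracking step, using OpenCV's pyramidal Lucas-Kanade implementation, is shown below. The window size, pyramid depth and termination criteria are our assumptions in the spirit of the description above, and the per-point Kalman filtering is only indicated by a comment.

```python
import cv2
import numpy as np

# Two consecutive bird-view frames (hypothetical filenames).
prev_gray = cv2.cvtColor(cv2.imread('birdview_000.png'), cv2.COLOR_BGR2GRAY)
next_gray = cv2.cvtColor(cv2.imread('birdview_001.png'), cv2.COLOR_BGR2GRAY)

# Points selected in the first frame (see Section 2.3).
points = cv2.goodFeaturesToTrack(prev_gray, 100, 0.05, 5)

# Iterative Lucas-Kanade with pyramids: maxLevel=3 gives 3 extra levels,
# each sub-sampled by a factor of 2, so large displacements are handled.
new_points, status, err = cv2.calcOpticalFlowPyrLK(
    prev_gray, next_gray, points, None,
    winSize=(11, 11), maxLevel=3,
    criteria=(cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 20, 0.03))

# Raw per-point displacements for the successfully tracked points; in the
# paper each point's velocity is further smoothed by its own Kalman filter.
ok = status.ravel() == 1
displacement = np.linalg.norm((new_points - points)[ok], axis=2).ravel()
```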

2.5 High-level Feature Extraction

The height $z$ of a point in 3D space is a function of its apparent velocity in the image sequence. A point with higher velocity should be located higher than a point with lower velocity, assuming that the corresponding object is stationary. Since points that are higher are also located further away from the origin in the bird view, the apparent velocity can also be used to estimate the $x$- and $y$-coordinates, using the projection shown in Fig. 5. The ground level pixel velocity $|p_g(t_k) - p_g(t_{k+1})|$ is first estimated using the odometry of the robot. The pixel displacement $|p_h(t_k) - p_h(t_{k+1})|$ at height $h$ and the distance of the tracked point from the bird view center $|p_{bv} - p_h(t_{k+1})|$ are then calculated. The distance that the point should be moved towards the origin of the bird view in order to give the corresponding $x$- and $y$-position at ground level can then be calculated as

$$\left| p_h(t_{k+1}) - p_g(t_{k+1}) \right| = \left| p_{bv} - p_h(t_{k+1}) \right| \cdot \left( 1 - \frac{\left| p_g(t_k) - p_g(t_{k+1}) \right|}{\left| p_h(t_k) - p_h(t_{k+1}) \right|} \right). \qquad (3)$$
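The projection of Eq. (3) is straightforward to express in code. The helper below is a hypothetical sketch (names are our own) that moves a tracked bird-view point towards the origin by the computed distance to obtain its ground-level position.

```python
import numpy as np

def ground_position(p_h_prev, p_h_cur, p_bv, v_ground):
    """Estimate the ground-level (x, y) position of a tracked point (Eq. 3).

    p_h_prev, p_h_cur: bird-view positions of the point at t_k and t_{k+1};
    p_bv: bird-view origin; v_ground: ground-level pixel velocity (odometry).
    """
    p_h_prev, p_h_cur, p_bv = map(np.asarray, (p_h_prev, p_h_cur, p_bv))
    v_point = np.linalg.norm(p_h_cur - p_h_prev)   # apparent pixel velocity
    radius = np.linalg.norm(p_bv - p_h_cur)        # distance from origin
    shift = radius * (1.0 - v_ground / v_point)    # Eq. (3)
    direction = (p_bv - p_h_cur) / radius          # unit vector towards origin
    return p_h_cur + shift * direction
```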

To obtain the high-level feature values required for input to the pattern recognition system, a histogram is constructed based on the velocity distribution of the tracked points (see Fig. 6). In these experiments, histograms were used with seven bins corresponding to seven speed intervals between 3.0 and 9.0 pixels per frame. The approach is simple but effective, provided that objects are oriented consistently in the vertical axis. It should be invariant to rotations around the vertical axis, though it would fail if an object is knocked over, turned upside-down, etc. More sophisticated methods for feature extraction will be subject to future research: for example, it should be possible to recover information about the orientation of the recognised objects if an appropriate representation is used.

Figure 6: Histograms showing the number of points with different pixel velocities (pixels/frame). Left: cone; right: chair 3.

Figure 7: From left to right: chair 1, chair 2, chair 3, table 1, table 2, drawers, bottle, cone, trash can.
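Translated into code, the feature vector is just a fixed-range histogram; a minimal sketch:

```python
import numpy as np

def velocity_histogram(velocities):
    """High-level feature vector: seven bins over pixel velocities between
    3.0 and 9.0 pixels/frame, as described above."""
    hist, _ = np.histogram(velocities, bins=7, range=(3.0, 9.0))
    return hist.astype(float)
```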

2.6 Pattern Recognition

The pattern recognition method used in this paper is a very simple and intuitive classifier known as a minimum distance classifier (mdc) [8]. In this method, mean vectors calculated from the training data for each class are assumed to be ideal prototypes for the objects. To classify a new input vector, the Euclidean distance to each of the prototypes is calculated, and the vector is assigned to the class with the shortest distance. Equivalently, the decision function for a minimum distance classifier can be written as

$$d_j(\mathbf{x}) = \mathbf{x}^T \mathbf{m}_j - \frac{1}{2} \mathbf{m}_j^T \mathbf{m}_j, \qquad (4)$$

where $\mathbf{x}$ is the pattern vector to be classified, and $\mathbf{m}_j$ is the mean vector of class $\omega_j$. Classification of a given object is then determined by the class that produces the highest decision value.
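The classifier can be written in a few lines; the sketch below implements Eq. (4) with NumPy (the class and method names are our own).

```python
import numpy as np

class MinimumDistanceClassifier:
    """Minimum distance classifier: one mean (prototype) vector per class."""

    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.means_ = np.array([X[y == c].mean(axis=0) for c in self.classes_])
        return self

    def predict(self, X):
        # Decision function of Eq. (4): d_j(x) = x.m_j - 0.5 * m_j.m_j.
        # Maximizing d_j is equivalent to minimizing Euclidean distance.
        scores = X @ self.means_.T - 0.5 * np.sum(self.means_ ** 2, axis=1)
        return self.classes_[np.argmax(scores, axis=1)]
```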

3 Experimental Results

The method was tested using image data recorded with the mobile robot for 9 different objects (see Fig. 7). Table 1 also shows the number of views from which data was collected (e.g., driving the robot past the 'North', 'South', 'East' or 'West' side of the object), and the total number of images recorded. The sequence of images for each object was separated into 20 smaller sequences containing 10 images each. For each sequence the pixel speeds were estimated and the histogram created. Classification was repeated 100 times, using a randomly selected set of 70% of the data for training and the remaining data for testing. The average of these results is given in Table 1. Chair 2 had the lowest rate of correct classifications, due to the fact that the initial distribution of low-level features varied a lot between different views. Further work is needed to address this problem. The total computation time for one iteration of the algorithm (excluding hand segmentation), measured on a 2 GHz Pentium 4, indicates that recognition in real-time is possible.

Name       Description         No. of Views   No. of Images   Correct classification
chair 1    Office chair        4              160             88%
chair 2    Regular chair       4              160             63%
chair 3    Office chair        4              160             100%
table 1    Square table        2              80              75%
table 2    Round table         1              40              88%
drawers    Chest of drawers    4              160             100%
bottle     1.2 m cylinder      1              40              85%
cone       Plastic cone        1              40              100%
trash can  Green trash can     1              40              100%

Table 1: Objects used and classification results.
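For reference, the evaluation protocol described above (100 random 70/30 splits, averaged) might look as follows, assuming a feature matrix X of per-sequence velocity histograms, a label vector y, and the MinimumDistanceClassifier sketched in Section 2.6.

```python
import numpy as np

# X: (n_sequences, 7) histogram features; y: (n_sequences,) object labels.
# Both are assumed to have been built from the recorded sub-sequences.
def evaluate(X, y, repeats=100, train_fraction=0.7, seed=0):
    rng = np.random.default_rng(seed)
    accuracies = []
    for _ in range(repeats):
        idx = rng.permutation(len(X))              # random train/test split
        n_train = int(train_fraction * len(X))
        train, test = idx[:n_train], idx[n_train:]
        clf = MinimumDistanceClassifier().fit(X[train], y[train])
        accuracies.append(np.mean(clf.predict(X[test]) == y[test]))
    return float(np.mean(accuracies))              # average correct rate
```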

4 Conclusion and Future Work

In this paper, we have presented a first attempt at recognition of typical objects found in indoor office environments by a mobile robot. The robot learns its own internal representation of a given object from its own sensory perceptions as it travels past that object, by combining exteroceptive sensory information from an omni-directional camera with proprioceptive sensory information (self-motion) from odometry. The method constrains the object recognition problem by exploiting the physical properties of the robot and its interaction with the environment. At present, we have only considered recognition of some selected objects in a laboratory environment using hand segmentation of the images, but the experiments demonstrate the concept of recognising object structure from motion in an embodied intelligent system. The pattern recognition system only uses information concerning the height $z$ of the tracked point features; the point distribution in the horizontal planes is not considered. Future work will include improvements at all levels of the recognition algorithm, for example: automatic segmentation of objects by clustering of low-level point features with similar attributes; further exploitation of the embodiment of the robot, e.g., by attempting to push objects, learning affordances of objects, etc.; better high-level features to allow a greater level of discrimination between different object types; more sophisticated pattern recognition techniques; integration of different sensor modalities, e.g., foveal vision, thermal vision, laser and ultrasonic range-finder sensors, etc.; discrimination of moving objects such as humans from non-moving objects; and experiments in cluttered environments.

References

[1] Jean-Yves Bouguet. Pyramidal implementation of the Lucas-Kanade feature tracker, description of the algorithm. Technical report, Intel Corporation, Microprocessor Research Labs, 1999.

[2] K. W. Bowyer and C. R. Dyer. Aspect graphs: an introduction and survey of recent results. International Journal of Imaging Systems and Technology, 2:315-328, 1990.

[3] Gary R. Bradski. Open Source Computer Vision Library. Intel Corporation, 2001.

[4] Gustavo Carneiro and Allan D. Jepson. Multi-scale phase-based local features. In IEEE Computer Vision and Pattern Recognition, volume 1, pages 736-743, 2003.

[5] P. Chang and J. Krumm. Object recognition with color cooccurrence histograms. In IEEE Computer Vision and Pattern Recognition, 1999.

[6] C. M. Cyr and B. B. Kimia. 3D object recognition using shape similarity-based aspect graph. In International Conference on Computer Vision, 2001.

[7] R. Dillmann, M. Ehrenmann, and M. Ambela. A comparison of four fast vision based object recognition methods. In Proceedings of the International Conference on Robotics and Automation, pages 1862-1867. IEEE, 2000.

[8] Richard O. Duda, Peter E. Hart, and David G. Stork. Pattern Classification. Wiley-Interscience, 2000.

[9] Arthur Gelb, editor. Applied Optimal Estimation. The MIT Press, 1974.

[10] Christopher Geyer and Kostas Daniilidis. Paracatadioptric camera calibration. IEEE Transactions on Pattern Analysis and Machine Intelligence, 24(5):687-695, 2002.

[11] Tony Jebara, Ali Azarbayejani, and Alex Pentland. 3D structure from 2D motion, 1999.

[12] Andrew Johnson and Martial Hebert. Using spin images for efficient object recognition in cluttered 3D scenes. IEEE Transactions on Pattern Analysis and Machine Intelligence, 21(5):433-449, May 1999.

[13] D. J. Kriegman, B. Vijayakumar, and J. Ponce. Reconstruction of HOT curves from image sequences. In CVPR 93, pages 20-26, 1993.

[14] Arthur R. Pope and David G. Lowe. Probabilistic models of appearance for 3D object recognition. International Journal of Computer Vision, 40(2):149-167, 2000.

[15] Rajesh P. N. Rao and Dana H. Ballard. Object indexing using an iconic sparse distributed memory. Technical Report TR 559, 1995.

[16] Jianbo Shi and Carlo Tomasi. Good features to track. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR'94), Seattle, June 1994.
