
Mining Discriminative Components With Low-Rank And Sparsity Constraints for Face Recognition

Qiang Zhang, Baoxin Li
Computer Science and Engineering
Arizona State University, Tempe, AZ, 85281
{qzhang53, baoxin.li}@asu.edu

ABSTRACT

This paper introduces a novel image decomposition approach for an ensemble of correlated images, using low-rank and sparsity constraints. Each image is decomposed as a combination of three components: one common component, one condition component, which is assumed to be a low-rank matrix, and a sparse residual. For a set of face images of N subjects, the decomposition finds N common components, one for each subject, K low-rank components, each capturing a different global condition of the set (e.g., different illumination conditions), and a sparse residual for each input image. Through this decomposition, the proposed approach recovers a clean face image (the common component) for each subject and discovers the conditions (the condition components and the sparse residuals) of the images in the set. The set of N + K images containing only the common and the low-rank components forms a compact and discriminative representation of the original images. We design a classifier using only these N + K images. Experiments on commonly-used face datasets demonstrate the effectiveness of the approach for face recognition through comparison with the leading state of the art in the literature. The experiments further show good accuracy in classifying the condition of an input image, suggesting that the components from the proposed decomposition indeed capture physically meaningful features of the input.

1.INTRODUCTION

Face recognition has been an active research field for a few decades, and its challenges and importance continue to attract efforts from many researchers, resulting in many new approaches in recent years. The most recent literature may be divided into roughly two groups, where methods in the first group try to model the physical processes of image formation under different conditions (e.g., illumination, expression, pose, etc.). For example, the approach of [10] models the face image under varying illumination conditions as a linear combination of images of the same subject captured at 9 specially designed illumination conditions; the SRC algorithm of [19] further assumes that face images with illumination and expression conditions can be represented as a sparse linear combination of the training instances (i.e., the dictionary atoms). On the other hand, the second group of approaches utilizes mathematical/statistical tools to capture the latent relations among face images for classification. E.g., the SUN approach [7] uses the statistics of human fixation on the images to recognize the face images; Volterrafaces [9] finds a latent space for face recognition, where the ratio of intra-class distance over inter-class distance is minimized. One major advantage of the techniques in the first class comes from their being generative in nature, which allows these methods to accomplish tasks like face relighting or novel pose generation in addition to recognition. The second group of methods in a sense ignores the physical properties of the face images and treats them as ordinary 2D signals. Although the methods in the first group have the above nice property, a baseline implementation usually requires dictionaries with training images as atoms and thus may face a scalability issue in real-world applications with a huge number of subjects. Hence efforts have also been devoted to reducing the size of the dictionary while attempting to retain the level of performance of the original dictionary. Examples include those that generate more compact dictionaries through some learning procedure (e.g., [13]) and those that attempt to extract subject-specific features that are effectively used as dictionary atoms (e.g., [15]). Our approach belongs to the second group. Since the expressive power of the original dictionary-based techniques comes largely from the number of training images for each subject, a compact dictionary may suffer from degraded performance unless the reduced dictionary properly captures the conditions of the original data that are critical for a recognition task. For example, the method of [15], while shown to be effective for expression-invariant recognition, is difficult to generalize to handle global conditions such as illumination change.

Categories and Subject Descriptors

H.4 [Information Systems Applications]: Miscellaneous

General Terms

Feature extraction and preprocessing

Keywords

Subspace learning, Low-rank matrix, Sparse matrix, Face Recognition, Component Decomposition

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. KDD’12, August 12–16, 2012, Beijing, China.

Copyright 2012 ACM 978-1-4503-1462-6/12/08... $15.00.


Global conditions such as illumination change often introduce to the data non-sparse conditions that cannot be captured by the sparsity model proposed therein. Recognizing that non-sparse conditions such as illumination change and large occlusion are critical for face recognition, and that for a typical application we may assume only a finite number of such conditions (e.g., a relatively small number of illumination conditions or other conditions), in this paper we propose a model for representing a set of face images by decomposing them into three components: a common component shared by images of the same subject, a low-rank component capturing non-sparse global changes, and a sparse residual component. Such a decomposition is partially inspired by the observation that the reconstruction of an image with its top few singular values and the corresponding singular vectors often captures the global information of the image, which can be represented by a low-rank matrix. To this end, a generic algorithm is proposed, with theoretic analysis of its convergence and parameter selection. The learned common and low-rank components form a compact and discriminative representation of the original set of images. A classifier is then built based on comparison of subspaces spanned by these components and by a novel image to be classified. This is very compact compared with the number of atoms in an over-determined dictionary such as that in [19]. Further, by explicitly modeling non-sparse conditions, the proposed approach is able to handle both illumination changes and large occlusions, which would fail methods like [15].

To demonstrate the effectiveness of the proposed method, we first design synthetic experiments with known ground truth to verify its key capability of recovering the underlying common, low-rank and sparse components. Then we report results on three commonly-used datasets of real face images: the Extended Yale B dataset [4], the CMU PIE dataset [17] and the AR dataset [14]. The experiments show that the proposed approach obtains better performance than the SRC algorithm [19], which utilizes a much larger dictionary, and the SUN approach [7]. The proposed approach also achieves comparable results to Volterrafaces, which is the current state of the art in the literature on a few commonly-used datasets. In addition, the proposed approach can explicitly model the most important feature of each subject and the conditions in the dataset. Experiments also show that the proposed method is robust to situations where a non-trivial percentage of the training images is unavailable. Further, the capability of the proposed approach to classify the type of condition that an input image is subject to is also demonstrated by extensive experiments. This suggests that the proposed decomposition is able to obtain physically meaningful and thus potentially discriminative components. We introduce the proposed method in Section 2, including the proposed model, the learning algorithm and the classification method. The experiments are reported and analyzed in Section 3. We conclude in Section 4 with a summary of the work and a brief discussion of future work.

In the presentation, we use uppercase bold font for matrices, e.g., X; lowercase bold font for vectors, e.g., x; and normal font for scalars, e.g., x. {X_{i,j}}_{i=1,j=1}^{N,M} denotes a set of N × M matrices, with X_{i,j} as its (i,j)th member. We assume that N is the number of subjects, and M the number of images per subject¹. Thus X_{i,j} refers to the jth image of the ith subject. When there is no confusion, we also use X to denote the set {X_{i,j}}_{i,j=1}^{N,M}.

2.PROPOSEDMETHOD

In this section, we first present the general formulation of the proposed model in Section 2.1, then present our algorithm for obtaining the desired decomposition in Section 2.2 and an analysis of its convergence in Sec. 2.3. With these, a face recognition algorithm is then designed in Section 2.4.

2.1DecomposingaFaceImageSet

In many applications of image and signal processing, we often consider a set of correlated signals as an ensemble. For efficient representation, a signal in the ensemble can often be viewed as a combination of a common component, which is shared among all the signals in the ensemble, and an innovation component, which is unique to this signal. Many benefits can be drawn from this decomposition of the ensemble, such as obtaining a better compression rate and being able to extract more relevant features. In face recognition, all the face images, especially the subset corresponding to a subject, may be naturally viewed as forming such an ensemble of correlated signals. In a sense, a sparse-coding approach like SRC implicitly figures out the correlation of the images in the ensemble via the sparse coefficients under the dictionary of the training images.

In this work, we aim at developing a new representation of this ensemble so that the face recognition task can be better supported. In particular, considering common challenges such as illumination conditions and large occlusions, we want a representation that can explicitly model such conditions. To this end, we propose the following decomposition of the face images X_{i,j} in the ensemble X:

X_{i,j} = C_i + A_j + E_{i,j},  ∀ X_{i,j} ∈ X        (1)

where C_i is the common part for Subject i, A_j is a low-rank matrix, and E_{i,j} is a sparse residual.
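The decomposition model of Eqn. (1) can be sketched numerically. The sizes, random components, and the 10% sparsity level below are illustrative assumptions of this sketch, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
N, M, h, w = 3, 4, 8, 8  # subjects, conditions, image height/width (illustrative)

# One common component per subject, one rank-1 (hence low-rank) condition
# component per condition, and a sparse residual (~10% support) per image.
C = [rng.random((h, w)) for _ in range(N)]
A = [np.outer(rng.random(h), rng.random(w)) for _ in range(M)]
E = [[rng.random((h, w)) * (rng.random((h, w)) < 0.1) for _ in range(M)]
     for _ in range(N)]

# Eqn (1): every image in the ensemble is the sum of the three parts.
X = [[C[i] + A[j] + E[i][j] for j in range(M)] for i in range(N)]
```

Note that each A_j here is a matrix shared across subjects, while C_i and E_{i,j} are per-subject and per-image respectively, matching the indexing of Eqn. (1).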

One essential difference between the proposed method and Robust PCA (RPCA [18]) is that RPCA assumes the signals are linearly dependent, with some sparsely corrupted entries. As a result, it builds a big matrix with each signal as a column vector. The big matrix would naturally be low-rank (because of the assumed inter-image correlation), in addition to having a sparse set of corrupted entries. On the other hand, the proposed decomposition is partially inspired by the observation that the reconstruction of an image with its first few singular values and the corresponding singular vectors often captures the global information of the image [12], e.g., illumination conditions and structured patterns, which can be represented by a low-rank matrix. Here the low-rank constraint arises from certain physical conditions (rather than from inter-image correlation), and it is imposed on each individual image. Accordingly, we represent images by matrices rather than vectors, unlike other methods such as [19, 18]. With this, we can expect that:

C_i is a matrix representing the common information of the images for Subject i, i.e., the common component;

¹For simplicity, we assume that each subject has the same number of images, which can always be achieved by using some blank images, a situation the proposed method can handle.


A_j is a low-rank matrix capturing the global information of the image X_{i,j}, e.g., illumination conditions (Fig. 3) or structured patterns (Fig. 1); and

E_{i,j} is a sparse matrix pertaining to image-specific details, such as expression conditions or noise with sparse support in the images.

In this modeling, we have assumed M different low-rank matrices, which are responsible for M different global conditions such as illumination conditions or large occlusions, and they are shared among the images of different subjects. However, images of each subject do not necessarily contain all the M conditions, as we will show in Sec. 2.2.

We can also obtain a variant of the above model by considering the Retinex theory, in which an image I can be represented as:

I(p,q) = R(p,q) · L(p,q)        (2)

Note that, unlike [18], where a set of images is stacked as the column vectors of a low-rank matrix, we do not convert the image to a vector in the decomposition stage.

To absorb the constraints into the objective function, we can reformulate Eqn. 5 with the augmented Lagrange multiplier as:

(C, A, E) = argmin_{C,A,E} Σ_{i,j} ( ‖A_j‖_* + λ_{i,j} ‖E_{i,j}‖_1 + ⟨Y_{i,j}, X_{i,j} − C_i − A_j − E_{i,j}⟩ + (μ_{i,j}/2) ‖X_{i,j} − C_i − A_j − E_{i,j}‖_F² )        (6)

where R(p,q) is the reflectance at location (p,q), which depends on the surface property, L(p,q) is the illumination, and · is the element-wise product. Converting this into the logarithm domain, we have

log(I) = log(R) + log(L)        (3)

The above equation indicates that we can represent the intensity of the face image as follows:

log(X_{i,j}) = C_i + A_j + E_{i,j},  ∀ X_{i,j} ∈ X        (4)

where C_i = log(R) captures the common property of the images for Subject i, A_j = log(L) captures the lighting conditions, and E_{i,j} captures the residual. This is a variant of the model in Eqn. 1, and is especially suitable for illumination-dominated datasets such as the extended Yale B dataset and the CMU-PIE dataset.
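A minimal sketch of why this log-domain variant helps: a multiplicative illumination change becomes an additive offset, which the additive condition component A_j of Eqn. (4) can absorb. The function name and the eps guard below are this sketch's own choices, not from the paper:

```python
import numpy as np

def to_log_domain(img, eps=1e-6):
    # eps avoids log(0) on completely dark pixels.
    return np.log(np.clip(img, eps, None))

# Two renderings of the same reflectance under different (uniform)
# illumination levels differ by a constant in the log domain:
reflectance = np.array([[0.2, 0.5], [0.7, 0.9]])
diff = to_log_domain(reflectance * 0.9) - to_log_domain(reflectance * 0.3)
# diff is constant (= log 3), independent of the reflectance pattern
```

This is the additivity that makes Eqn. (4) a natural fit for illumination-dominated datasets.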

With the above decomposition, the entire dataset containing N × M images can be compactly represented by N common components and K low-rank components. If we extract the common component C_i for the face images of Subject i under different conditions, we expect that this common component C_i represents the most significant feature of that subject. The set of all the learned low-rank components A = {A_j}_{j=1}^{M} represents all possible global conditions of the images in the set. Hence we may use A and C_i to span the subspace of the face images for Subject i, where, in the ideal case, any face image of this subject should lie, barring a sparse residual. This suggests that we can utilize the subspaces for face recognition by identifying which subspace a test image is more likely to lie in, which is detailed in Sec. 2.4.

where Y_{i,j} is the Lagrange multiplier, and λ_{i,j} and μ_{i,j} are scalars controlling the weights of the sparsity and reconstruction error terms accordingly. When μ is sufficiently large, Eqn. 6 is equivalent to Eqn. 5. It is worth pointing out that, while for clarity we have written only the expression for Subject i, the optimization is actually done for the entire set of images, since the low-rank components are deemed as shared by all images.

To solve the problem of Eqn. 6, a block coordinate descent algorithm may be designed, with each iterative step solving a convex optimization problem [3][18] for one of the unknowns. To this end, we first describe the following three sub-solutions that are needed in each iteration of such an algorithm, each of which solves for one of the unknowns (blocks) while fixing the others.

Sub-solution 1: For finding an optimal E_{i,j} in the t-th iteration, the problem can be written as

E_{i,j} = argmin_{E_{i,j}} λ_{i,j} ‖E_{i,j}‖_1 + (μ_{i,j}/2) ‖X^E_{i,j} + (1/μ_{i,j}) Y_{i,j} − E_{i,j}‖_F²        (7)

with X^E_{i,j} = X_{i,j} − C_i − A_j. So we do the following update [6]:

E_{i,j} = S_{λ_{i,j}/μ_{i,j}} ( X^E_{i,j} + (1/μ_{i,j}) Y_{i,j} )        (8)

where S_τ(X) = sign(X) · max(0, |X| − τ).
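The shrinkage operator S_τ of Eqn. (8) is a one-liner applied element-wise; a sketch assuming a NumPy array input:

```python
import numpy as np

def soft_threshold(X, tau):
    # S_tau(X) = sign(X) * max(0, |X| - tau), element-wise (Eqn 8).
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)
```

For example, with tau = 1.0, the entries [-2.0, 0.5, 3.0] shrink toward zero, and any entry with magnitude below tau is zeroed out entirely, which is what produces the sparsity of E_{i,j}.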

Sub-solution 2: For finding an optimal A_j in the t-th iteration, the problem can be written as

A_j = argmin_{A_j} ‖A_j‖_* + Σ_i (μ_{i,j}/2) ‖X^A_{i,j} − A_j‖_F²        (9)

2.2 An Algorithm for the Decomposition

Based on Eqn. 1, we formulate the decomposition task as the following constrained optimization problem, with an objective function derived from the requirement of decomposing a set of images into some common components, some low-rank matrices and the sparse residuals:

(C, A, E) = argmin_{C,A,E} Σ_{i,j} ( ‖A_j‖_* + λ_{i,j} ‖E_{i,j}‖_1 )

s.t. X_{i,j} = C_i + A_j + E_{i,j},  ∀ X_{i,j} ∈ X        (5)

where ‖A_j‖_* = Σ_i σ_i(A_j) is the nuclear norm, ‖E_{i,j}‖_1 = Σ_{p,q} |E_{i,j}(p,q)| is the ℓ1 norm, and E = {E_{i,j}}_{i,j=1}^{N,M}.

For Sub-solution 2, we use the singular value thresholding algorithm [2, 5]:

UΣV^T ← ( Σ_i (μ_{i,j} X^A_{i,j} + Y_{i,j}) ) / ( Σ_i μ_{i,j} ),
A_j = U S_τ(Σ) V^T

with X^A_{i,j} = X_{i,j} − C_i − E_{i,j} and τ = 1 / Σ_i μ_{i,j}.
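The singular value thresholding step can be sketched as follows. This is a generic implementation of the operator (the proximal operator of the nuclear norm), not the paper's code:

```python
import numpy as np

def svt(X, tau):
    # Shrink the singular values of X by tau; used to update each A_j.
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt
```

Because singular values below tau are zeroed, the output has reduced rank, which is exactly what enforces the low-rank structure of the condition components.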

Sub-solution 3: The solution to the problem of finding an optimal C_i,

C_i = argmin_{C_i} Σ_j (μ_{i,j}/2) ‖X^C_{i,j} − C_i‖_F²        (10)

where X^C_{i,j} = X_{i,j} − A_j − E_{i,j}, can be obtained directly (by taking the derivative of the objective function and setting it to zero) as

C_i = ( Σ_j (Y_{i,j} + μ_{i,j} X^C_{i,j}) ) / ( Σ_j μ_{i,j} )        (11)
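The closed-form update of Eqn. (11) is a μ-weighted average over the subject's images, with the multipliers Y_{i,j} folded in. A sketch, with argument names chosen here for illustration:

```python
import numpy as np

def update_common(X_i, A, E_i, Y_i, mu_i):
    # X_i, E_i, Y_i: per-image matrices for one subject; A: condition
    # components; mu_i: per-image weights. Implements Eqn (11).
    num = sum(Y + mu * (X - Aj - E)
              for X, Aj, E, Y, mu in zip(X_i, A, E_i, Y_i, mu_i))
    return num / sum(mu_i)
```

With Y = 0, E = 0, A = 0 and equal weights, this reduces to the plain mean of the subject's images, which matches the intuition that C_i is the shared content of the ensemble.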

As alluded to earlier, the images of any given subject may not range over all possible M conditions. This may be equivalently viewed as a problem where some images are missing for the subject. We now show how this can be addressed in a principled way. Assume that Ω is the set of (i,j) where X_{i,j} is available and Ω̄ is the complement of Ω. To deal with the missing entries, we only need to set Y_{i,j}, μ_{i,j} and X_{i,j} to 0 for (i,j) ∈ Ω̄ in the initialization stage. In each iteration, we do not update E_{i,j} for (i,j) ∈ Ω̄. The proposed decomposition algorithm will automatically infer the missing images.

With the above preparation, we now propose the following Algorithm 1 to solve Eqn. 6:

Algorithm 1: Learning the Decomposition
Input: X, Ω, N, M, ρ, λ and τ;
Output: {C_i}_{i=1}^{N}, {A_j}_{j=1}^{M} and {E_{i,j}}_{i,j=1}^{N,M};
% Initialization
t = 0; C_i^0 = A_j^0 = E_{i,j}^0 = 0;
Y_{i,j}^0 = X_{i,j} / ‖X_{i,j}‖_F, μ_{i,j}^0 = τ / ‖X_{i,j}‖_F for (i,j) ∈ Ω;
Y_{i,j}^0 = 0, μ_{i,j}^0 = 0 for (i,j) ∉ Ω;
while not converged do
    Solve E_{i,j} for (i,j) ∈ Ω by Sub-solution 1;
    Solve A_j for j = 1, 2, ..., M with Sub-solution 2;
    Solve C_i for i = 1, 2, ..., N using Sub-solution 3;
    % Update Y_{i,j} and μ_{i,j} for (i,j) ∈ Ω:
    Y_{i,j}^{t+1} = Y_{i,j}^t + μ_{i,j}^t (X_{i,j} − C_i^{t+1} − A_j^{t+1} − E_{i,j}^{t+1});
    μ_{i,j}^{t+1} = ρ μ_{i,j}^t;
    t = t + 1;
end

Ideally a face image from Subject i should lie in a subspace spanned by its common component C_i and the low-rank components A. Therefore, we propose the following classification scheme, based on comparing the distance between the subspaces spanned by the training components and those spanned after replacing the training common component with the test image y. We first build the subspace S_i for Subject i, which contains all the linear combinations of the images of Subject i under all conditions, i.e.,

S_i = { x | x = Σ_j w_j (c_i + a_j),  ∀ w ∈ R^M }        (12)

where c_i and a_j are the vectorized forms of C_i and A_j respectively. Subspace S_i can be sufficiently represented by a set of "basis" vectors, i.e., {c_i + a_j}_{j=1}^{M}. Accordingly, we can build the subspace S_y for the test image y as follows:

S_y = { x | x = Σ_j w_j (y + a_j),  ∀ w ∈ R^M }        (13)

Then we use the principal angles [8] between these subspaces to measure their similarities. In this paper, the principal angles measure the cosine distance between the subspaces, which is calculated as s(S_i, S_y) = Σ_k cos²(θ_k), where θ_k is the kth principal angle between S_i and S_y. We then assign i as the label of y, for the i for which s(S_i, S_y) is maximal.
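The principal-angle similarity used by this classifier can be sketched via QR and SVD: the cosines of the principal angles are the singular values of Q1ᵀQ2 for orthonormal bases Q1, Q2 (a standard construction consistent with [8]; the function name is this sketch's own):

```python
import numpy as np

def subspace_similarity(B1, B2):
    # B1, B2: matrices whose columns span the subspaces, e.g. the
    # vectorized c_i + a_j basis of Eqn (12). Returns sum_k cos^2(theta_k).
    Q1, _ = np.linalg.qr(B1)
    Q2, _ = np.linalg.qr(B2)
    cosines = np.linalg.svd(Q1.T @ Q2, compute_uv=False)
    return float(np.sum(cosines ** 2))
```

Classification is then argmax over subjects: build one basis per subject from {c_i + a_j}, one basis from {y + a_j}, and pick the subject whose subspace yields the largest similarity.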

3. EXPERIMENTAL RESULTS

Experiments have been done to evaluate the proposed model and algorithms. In this section, we report several sets of results from such experiments. First, simulations (Sec. 3.1) are employed to demonstrate the convergence and parameter selection of the proposed decomposition algorithm. Then, we show the decomposition of the images from the extended Yale B dataset and also how the learned components can be used to reconstruct new images in Sec. 3.2. Finally, we demonstrate the application of the proposed method and algorithms in classification tasks, including face recognition (Sec. 3.3) and identifying the conditions of the images (Sec. 3.4). The performance of the proposed method in the face recognition task is compared with that of SRC [19], Volterrafaces [9] and SUN [7] on 2 commonly used datasets, i.e., extended Yale B [10] and CMU-PIE [17].

For convergence, we check

Σ_{i,j} ‖X_{i,j} − C_i − A_j − E_{i,j}‖_F² / Σ_{i,j} ‖X_{i,j}‖_F²

and if it is small enough (e.g., 10⁻⁶), we terminate the algorithm. λ, τ and ρ are the three parameters specified in the input, which are discussed in Sec. 3.1.
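Putting the three sub-solutions, the multiplier updates, and this convergence check together, a compact single-subject (N = 1) sketch of Algorithm 1 might look as follows. The helper names, the choice τ = 1.25, and the tiny synthetic setting are illustrative assumptions of this sketch:

```python
import numpy as np

def soft(X, t):
    # Shrinkage operator S_t (Sub-solution 1 / Eqn 8).
    return np.sign(X) * np.maximum(np.abs(X) - t, 0.0)

def svt(X, t):
    # Singular value thresholding (Sub-solution 2).
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - t, 0.0)) @ Vt

def decompose(X, rho=1.25, lam=None, n_iter=100, tol=1e-6):
    # X: list of M images of a single subject.
    M, m = len(X), X[0].shape[1]
    lam = lam if lam is not None else 1.0 / np.sqrt(m)     # lambda = 1/sqrt(m)
    C = np.zeros_like(X[0])
    A = [np.zeros_like(X[0]) for _ in range(M)]
    E = [np.zeros_like(X[0]) for _ in range(M)]
    Y = [Xj / np.linalg.norm(Xj) for Xj in X]              # Y0 = X / ||X||_F
    mu = [1.25 / np.linalg.norm(Xj) for Xj in X]           # mu0 = tau / ||X||_F
    for _ in range(n_iter):
        for j in range(M):
            E[j] = soft(X[j] - C - A[j] + Y[j] / mu[j], lam / mu[j])
            A[j] = svt(X[j] - C - E[j] + Y[j] / mu[j], 1.0 / mu[j])
        C = sum(Y[j] + mu[j] * (X[j] - A[j] - E[j]) for j in range(M)) / sum(mu)
        for j in range(M):
            Y[j] += mu[j] * (X[j] - C - A[j] - E[j])       # multiplier update
            mu[j] *= rho                                   # mu grows by rho
        err = sum(np.linalg.norm(X[j] - C - A[j] - E[j]) ** 2 for j in range(M))
        if err / sum(np.linalg.norm(Xj) ** 2 for Xj in X) < tol:
            break
    return C, A, E
```

On synthetic data built as common + rank-1 condition components, the relative reconstruction error drops below the tolerance as μ grows, mirroring the convergence criterion above.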

2.3 Convergence of the Algorithm

The convergence property of an iterative optimization procedure like the algorithm proposed above is critical to its usefulness. Algorithm 1 has a similar convergence property as the methods described in [11], which are also augmented-Lagrange-multiplier-based approaches. We can draw the following theorem:

Theorem 1. If Σ_{t=1}^{∞} μ_{i,j}^{t+1} (μ_{i,j}^t)^{−2} < ∞ and lim_{t→∞} μ_{i,j}^t (E_{i,j}^{t+1} − E_{i,j}^t) = 0 ∀ i,j, then Algorithm 1 will converge to the optimal solution of the problem of Eqn. 5.

The proof of Theorem 1 is included in the appendix.

3.1 Simulation-based Experiments

In this subsection, we use synthetic data to demonstrate the convergence of the algorithm and the selection of the parameters. The common components and condition components used in this experiment are shown in Fig. 1 (b, c), where the condition components are from [16] and both components are rescaled to the range [0, 1]. The sparse components are sampled from a uniform distribution in the range [0, 1]. We use those components to generate the 25 images used in this experiment, according to Eqn. 1.

Algorithm 1 in Sec. 2.2 requires three parameters: ρ controls the convergence speed; λ controls the sparsity of the sparse residuals; and τ is a scalar. In [11], the authors suggest ρ = 1.5, τ = 1.25 and λ = 1/√m for Robust PCA, where m is the width of X_{i,j}. We have also found that λ = 1/√m is optimal in our experiments, and thus we adopt this selection in our paper. From the experiments, we found that τ ∈ [0.125, 2] and ρ = 1.25 is an optimal choice.

2.4 Face Recognition Using the Decomposition

With the components in Eqn. 1 estimated by the previous algorithm, we now discuss how to classify a test image. Recognizing that the sparse residual captures only image-specific details that have not been absorbed by the common or the global-condition components, we discard the sparse residuals from the decomposition (training) stage and keep only the common and the low-rank components.

1472


Figure 2: (a) the input data with 10 images manually removed; (b, c) the common components and condition components decomposed from (a), accordingly.

Comparing these with the original data, it is evident that the recovered commons are largely clean pictures of the subjects, while the condition components align well with the given illumination conditions. This experiment shows the capability of the proposed method with the Retinex model to discover the illumination conditions and the subject commons from a set of real images.

Next, we randomly pick 32 illumination conditions out of the decomposed conditions and, with the common component of Subject 1, form a subspace as described in Eqn. 14. Then we use the proposed method to identify whether a new image is in this subspace, by reconstructing this image as a linear combination of the "basis" of this subspace, i.e., c_1 + a_j. Fig. 4 shows an example where the new image is also picked from Subject 1, and Fig. 5 shows another example where the new image is picked from Subject 2. These examples suggest that the learned components can be used for identifying which subject a new image belongs to. Similarly, the learned components can also be used for identifying which conditions the new image is associated with. These two scenarios are further evaluated in the following two subsections, with real face images.

Figure 1: (a) shows the 25 images generated in the experiment, where the sparse part has 20% support of each image. (b, c) show the ground truth of the common components and condition components accordingly. We also show the common components (d) and condition components (e) decomposed from (a) when ρ = 1.25 and τ = 2.

Fig. 1 shows an example of the recovered common components (d) and condition components (e) when the sparse part has 20% support of the image.²

To demonstrate the robustness of the algorithm when only part of the data is available, we randomly remove 10 images from the 25 images (Fig. 2 (a)) and run the algorithm with the same set of parameters. The results are shown in Fig. 2, where (b) is the recovered common components and (c) is the recovered condition components. These results suggest that the algorithm is still able to produce reasonable results even with 40% of the images missing.

3.2 Decomposing a Set of Images

In this subsection, we first demonstrate the decomposition of the set of images from the Extended Yale B dataset [4]. All the 2432 images from 38 subjects under the illumination conditions were used. The common components and the condition components are illustrated in Fig. 3.

²The recovered parts are subject to a linear shift and scaling. We identify the parameters for this linear shift and scaling and then map them back with those parameters.

3.3 Recognizing the Face Images

In this subsection, we demonstrate the performance of the proposed method in the face recognition task, in comparison with SRC, Volterrafaces and SUN on the extended Yale B dataset and the CMU PIE dataset. As these two datasets are dominated by illumination conditions, we use the Retinex model for the proposed method, i.e., each image is converted to the logarithm domain.



Figure 5: (a) the coefficients for the linear combination, (b) the input image and (c) the reconstructed image.

Elements of the sampling matrix that are set to 0 indicate that the corresponding images are not used for training.

The Extended Yale B dataset [4] contains N = 38 subjects with 64 images for each subject, corresponding to the 64 illumination conditions in the dataset. The images are resized to 48 × 42. The results on the extended Yale B dataset are summarized in Tab. 1. From this table, we find that the proposed approach and Volterrafaces achieve the best results, while SUN clearly gets the lowest accuracy. The performance of SRC degrades dramatically as the size of the dictionary (i.e., the number of training instances) is reduced.

The CMU PIE dataset [17] contains N = 68 subjects with varying poses, illuminations, expressions, etc. For all the images, we manually crop the face region according to the eye positions, then resize them to 50 × 35. The results are summarized in Tab. 2. In Experiment 1, all 4 methods get similar results; in Experiment 2, the proposed method and Volterrafaces get the best results; and in Experiment 3, the proposed approach gets the best result. In addition, the proposed method is more robust to missing training images. The performance of SRC degrades noticeably as the size of the dictionary is reduced.

To illustrate the speed of the proposed approach, we compared the time required to classify one image with our approach and with the SRC approach. This time was about 0.84 seconds for our method, and about 1.59 seconds for SRC. The time for the decomposition (i.e., Algorithm 1) is less than 5 minutes. The most time-consuming part of the proposed approach is the singular value decomposition (SVD), which is used in computing the principal angles, so an efficient implementation of SVD can make the proposed algorithm even faster.

Figure 3: The decomposition of the extended Yale B dataset. We use all the 2432 images, which contain 38 subjects (b) and 64 illumination conditions (a).

Figure 4: (a) the coefficients for the linear combination, (b) the input image, which is not observed among the images used for training the 32 illumination conditions, and (c) the reconstructed image.

In the SRC method, we build the dictionary by taking all the training images as its columns. Since there is no code publicly available for SRC, we built our own implementation. For the ℓ1 optimization used by SRC, we used Orthogonal Matching Pursuit (OMP) [1] as the solver. We set the number of non-zero elements in the sparse coefficient (referred to as K later) to be twice the number of conditions in the training data. In addition, each image is normalized to have zero mean and unit ℓ2 norm for SRC. For Volterrafaces and SUN, we use the authors' original implementations and the provided parameters. For all the results, we present both the mean and the standard deviation of the accuracies over 3 rounds of experiments.

To examine the robustness of the approaches with respect to the amount of training data, we use the following scheme. In each experiment, we only pick "#train per subject" images for each subject as the training instances, according to a randomly generated sampling matrix.

3.4 Identifying the Conditions

Finally, we use an experiment to show how the proposed method can be applied to identifying the conditions the testing images are associated with. The AR dataset [14] contains N = 100 subjects and 26 images for each subject. The dataset contains 2 sessions, which were taken at different times. Each session contains 13 conditions: 4 for expressions, 3 for illuminations, 3 for sunglasses and 3 for scarves.


(a) Experiment 1
#train per subject |      32       |      24       |      16       |      8
Proposed           | 99.78±0.24%   | 99.±0.04%     | 99.18±0.14%   | 98.32±0.03%
SRC                | 96.48±0.44%   | 95.29±0.52%   | 91.90±0.94%   | 81.02±0.13%
Volterrafaces      | 99.95±0.06%   | 99.80±0.26%   | 99.48±0.49%   | 96.27±4.03%
SUN                | .61±1.85%     | 87.±2.80%     | 76.91±3.71%   | 68.86±0.00%

(b) Experiment 2
#train per subject |      16       |      12       |      8        |      4
Proposed           | 99.56±0.00%   | 99.33±0.23%   | 95.15±1.03%   | 80.03±2.17%
SRC                | .14±0.00%     | 87.88±0.44%   | 78.65±1.81%   | 58.±1.26%
Volterrafaces      | 99.25±0.34%   | 99.17±0.39%   | 90.22±11.84%  | 91.03±2.43%
SUN                | 79.22±0.00%   | 76.75±0.00%   | 60.17±2.09%   | 51.60±0.00%

Table 1: The results on the extended Yale B dataset. Experiment 1: we randomly pick M = 32 illumination conditions for training and the remaining for testing, i.e., we obtain N = 38 common components and M = 32 conditions by the proposed method. Experiment 2: we manually pick M = 16 illumination conditions for training and the remaining for testing.

4. CONCLUSIONS AND FUTURE WORK

In this paper, we proposed a novel decomposition of a set of face images of multiple subjects, each with multiple images. The decomposition finds a common image and a low-rank image for each of the subjects in the set. All the low-rank images form a set that is used to capture all possible global conditions existing in the set of images. This facilitates explicit modeling of typical challenges in face recognition, such as illumination conditions and large occlusion. Based on the decomposition, a face classifier was designed, using the decomposed components for subspace reconstruction and comparison. The classification performance shows that the proposed approach can achieve state-of-the-art performance. Experiments also showed that the proposed method is robust to missing training images, which can be an important factor to consider in a practical system. We also demonstrated with experiments, on both synthetic data and real data, that the decomposition indeed captures physically meaningful conditions.

There are a few possible directions for further development of the work. In particular, the current algorithm assumes that the low-rank conditions of the training images are known and given for each of them. In practice, if the data do not have such an image-level label (but still have a finite set of low-rank conditions), it is possible to extend the current algorithm by incorporating another step that attempts to estimate a mapping matrix for assigning a condition label to each image during the optimization iteration. For example, we may define a mapping matrix Φ with Φ_{i,j} = k indicating that training image X_{i,j} is associated with condition A_k. Eqn. 1 suggests a constraint that we may use to solve for Φ: the optimal mapping matrix should result in the most sparsity for E_{i,j} or the lowest rank for A_k, given the same reconstruction error. If we use the first criterion, the problem of finding Φ can be formulated as Φ = argmin_Φ Σ_{i,j} ‖X_{i,j} − C_i − A_{Φ_{i,j}}‖_1.

Figure 6: The confusion matrix (in percentage) of the condition recognition results from the proposed method, where both axes are the condition index.

In our experiments, we use one session for training and the other session for testing. The images are converted to grayscale and resized to 55 × 40. To recognize the associated condition, we slightly change the formulation of the subspaces:

S_i = { x | x = Σ_j w_j (a_i + c_j),  ∀ w ∈ R^N }        (14)

S_y = { x | x = Σ_j w_j (y + c_j),  ∀ w ∈ R^N }        (15)

where S_i is the subspace for condition i and S_y the subspace for the test image. The other settings were the same as those of the previous face recognition experiments.

The proposed method achieves an accuracy of 91.77% in recognizing the conditions, with the confusion matrix given in Fig. 6, where we achieved over 96% accuracy for all but conditions 1, 2, 3 (3 expressions) and 12. This experiment again demonstrates the effectiveness of the proposed method in capturing the physical conditions in the form of low-rank components.

5. ACKNOWLEDGMENT

The work was supported in part by a grant (Grant No. 08469) from the National Science Foundation. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation.


(a) Experiment 1
#train per subject |      20       |      15       |      10       |      5
Proposed           | 100±0.00%     | 100±0.00%     | 99.65±0.37%   | 97.49±0.21%
SRC                | 99.88±0.07%   | 99.88±0.07%   | 99.73±0.14%   | 97.73±0.%
Volterrafaces      | 100±0.00%     | 100±0.00%     | 100±0.00%     | 95.83±4.16%
SUN                | 100%          | 99.84±0.11%   | 99.45±0.43%   | 95.75±0.49%

(b) Experiment 2
#train per subject |      12       |      9        |      6        |      3
Proposed           | 100±0.00%     | 99.96±0.08%   | 99.17±0.15%   | 94.70±0.20%
SRC                | 99.91±0.16%   | 98.±1.74%     | 96.90±3.73%   | 87.18±1.78%
Volterrafaces      | 100±0.00%     | 100±0.00%     | 99.±0.31%     | 94.30±4.72%
SUN                | 100±0.00%     | 99.84±0.05%   | 98.53±0.29%   | 88.75±4.72%

(c) Experiment 3
#train per subject |      40       |      30       |      20       |      10
Proposed           | 99.98±0.03%   | 99.92±0.06%   | 99.24±0.06%   | 90.95±0.70%
SRC                | 99.98±0.03%   | 99.45±0.03%   | 96.79±0.28%   | 86.98±0.16%
Volterrafaces      | 99.60±0.22%   | 98.37±0.47%   | 97.63±0.28%   | .72±1.45%
SUN                | 99.93±0.05%   | 99.38±0.14%   | 97.±0.30%     | 88.29±0.02%

Table 2: The results on the CMU-PIE dataset. Experiment 1: we pick the images with frontal pose (C27), which include 43 illumination conditions for each subject; we randomly pick M = 20 conditions for training and the remaining for testing. Experiment 2: we again only pick the images with frontal pose, but we randomly pick M = 12 conditions for training and the remaining for testing. Experiment 3: we use all the images from 5 near-frontal poses (C05, C07, C09, C27, C29), which include 153 conditions for each subject; we randomly pick M = 40 conditions for training and the remaining for testing.

6. REFERENCES

[1] M. Aharon, M. Elad, and A. Bruckstein. K-SVD: Design of dictionaries for sparse representation. Proceedings of SPARS, 5, 2005.
[2] J. F. Cai, E. J. Candes, and Z. Shen. A singular value thresholding algorithm for matrix completion. Preprint, 2008.
[3] E. Candes and Y. Plan. Matrix completion with noise. Proceedings of the IEEE, 2009.
[4] A. Georghiades, P. Belhumeur, and D. Kriegman. From few to many: Illumination cone models for face recognition under variable lighting and pose. IEEE Transactions on Pattern Analysis and Machine Intelligence, 23(6):643–660, 2001.
[5] D. Goldfarb and S. Ma. Convergence of fixed-point continuation algorithms for matrix rank minimization. Foundations of Computational Mathematics, pages 1–28, 2011.
[6] E. T. Hale, W. Yin, and Y. Zhang. Fixed-point continuation for ℓ1-minimization: Methodology and convergence. SIAM Journal on Optimization, 19(3):1107–1130, 2008.
[7] C. Kanan and G. Cottrell. Robust classification of objects, faces, and flowers using natural image statistics. In Computer Vision and Pattern Recognition (CVPR), 2010 IEEE Conference on, pages 2472–2479, June 2010.
[8] A. Knyazev and M. Argentati. Principal angles between subspaces in an A-based scalar product: algorithms and perturbation estimates. SIAM Journal on Scientific Computing, 23(6):2008–2040, 2002.
[9] R. Kumar, A. Banerjee, and B. Vemuri. Volterrafaces: Discriminant analysis using Volterra kernels. In Computer Vision and Pattern Recognition, 2009. CVPR 2009. IEEE Conference on, pages 150–155, June 2009.
[10] K. Lee, J. Ho, and D. Kriegman. Acquiring linear subspaces for face recognition under variable lighting. IEEE Transactions on Pattern Analysis and Machine Intelligence, pages 684–698, 2005.
[11] Z. Lin, M. Chen, L. Wu, and Y. Ma. The augmented Lagrange multiplier method for exact recovery of corrupted low-rank matrices. Arxiv preprint arXiv:1009.5055, 2010.
[12] J. Liu, S. Chen, and X. Tan. Fractional order singular value decomposition representation for face recognition. Pattern Recognition, 41:378–395, January 2008.
[13] J. Mairal, F. Bach, J. Ponce, G. Sapiro, and A. Zisserman. Discriminative learned dictionaries for local image analysis. In Computer Vision and Pattern Recognition, 2008. CVPR 2008. IEEE Conference on, pages 1–8. IEEE, 2008.
[14] A. Martinez and R. Benavente. The AR face database. Technical report, CVC Technical report, 1998.
[15] P. Nagesh and B. Li. A compressive sensing approach for expression-invariant face recognition. In Computer Vision and Pattern Recognition, 2009. CVPR 2009. IEEE Conference on, pages 1518–1525, 2009.
[16] J. Portilla and E. Simoncelli. A parametric texture model based on joint statistics of complex wavelet coefficients. International Journal of Computer Vision, 40(1):49–70, 2000.
[17] T. Sim, S. Baker, and M. Bsat. The CMU pose, illumination, and expression (PIE) database. In Proceedings of the 5th International Conference on Automatic Face and Gesture Recognition, 2002.
[18] J. Wright, A. Ganesh, S. Rao, Y. Peng, and Y. Ma. Robust principal component analysis: Exact recovery of corrupted low-rank matrices via convex optimization. In Advances in Neural Information Processing Systems 22.
[19] J. Wright, A. Yang, A. Ganesh, S. Sastry, and Y. Ma. Robust face recognition via sparse representation. Pattern Analysis and Machine Intelligence, IEEE Transactions on, 31(2):210–227, 2009.

APPENDIX
A. PROOF OF THEOREM 1

Proposition 1. The sequences $\tilde{Y}_{i,j}^{t+1}$ and $\dot{Y}_{i,j}^{t+1}$ are all bounded $\forall i,j$, where

\[
\begin{aligned}
Y_{i,j}^{t+1} &= Y_{i,j}^{t} + \mu_{i,j}^{t}\,(X_{i,j} - C_i^{t+1} - A_j^{t+1} - E_{i,j}^{t+1}),\\
\hat{Y}_{i,j}^{t+1} &= Y_{i,j}^{t} + \mu_{i,j}^{t}\,(X_{i,j} - C_i^{t} - A_j^{t+1} - E_{i,j}^{t+1}),\\
\tilde{Y}_{i,j}^{t+1} &= Y_{i,j}^{t} + \mu_{i,j}^{t}\,(X_{i,j} - C_i^{t} - A_j^{t} - E_{i,j}^{t+1}),\\
\dot{Y}_{i,j}^{t+1} &= \dot{Y}_{i,j}^{t} + \mu_{i,j}^{t}\,(X_{i,j} - \dot{C}_i^{t+1} - \dot{A}_j^{t+1} - \dot{E}_{i,j}^{t+1}),
\end{aligned}
\]

and $(\dot{C}^{t+1}, \dot{A}^{t+1}, \dot{E}^{t+1})$ is the optimal solution to the problem $\min_{C,A,E} L(C, A, E, \dot{Y}^{t}, \mu^{t})$ with $\dot{Y}^{t} = \{\dot{Y}_{i,j}^{t}\}_{i,j=1}^{N,M}$.

Proof. Let us write the Lagrange function in Eqn. 6 as:

\[
L(\{C_i^{t}\}_i, \{A_j^{t}\}_j, \{E_{i,j}^{t}\}_{i,j}, \{Y_{i,j}^{t}\}_{i,j}, \{\mu_{i,j}^{t}\}_{i,j})
= \sum_{i,j} \|A_j^{t}\|_* + \lambda_{i,j}\|E_{i,j}^{t}\|_1
+ \big\langle Y_{i,j}^{t},\, X_{i,j} - C_i^{t} - A_j^{t} - E_{i,j}^{t} \big\rangle
+ \frac{\mu_{i,j}^{t}}{2}\,\big\|X_{i,j} - C_i^{t} - A_j^{t} - E_{i,j}^{t}\big\|_F^2.
\]

For simplicity, we will use $L(C^{t}, A^{t}, E^{t}, Y^{t}, \mu^{t})$ instead of $L(\{C_i^{t}\}_i, \{A_j^{t}\}_j, \{E_{i,j}^{t}\}_{i,j}, \{Y_{i,j}^{t}\}_{i,j}, \{\mu_{i,j}^{t}\}_{i,j})$. The subgradient of $L(C^{t}, A^{t}, E, Y^{t}, \mu^{t})$ over $E_{i,j}$ is

\[
\lambda_{i,j}\,\partial\|E_{i,j}\|_1 - \mu_{i,j}^{t}\,(X_{i,j} - C_i^{t} - A_j^{t} - E_{i,j}) - Y_{i,j}^{t}.
\]

As $E_{i,j}^{t+1}$ is optimal for the problem $\arg\min_{E_{i,j}} L(C^{t}, A^{t}, E, Y^{t}, \mu^{t})$,

\[
0 \in \lambda_{i,j}\,\partial\|E_{i,j}^{t+1}\|_1 - \tilde{Y}_{i,j}^{t+1},
\]

i.e., $\tilde{Y}_{i,j}^{t+1} \in \lambda_{i,j}\,\partial\|E_{i,j}^{t+1}\|_1$; according to Theorem 3 of [11], $\tilde{Y}_{i,j}^{t+1}$ is bounded $\forall i,j$. Similarly, we can also show that $\sum_j Y_{i,j}^{t+1}$, $\sum_i \hat{Y}_{i,j}^{t+1}$ and $\dot{Y}_{i,j}^{t+1}$ are bounded $\forall i,j$.

Proposition 2. The sequence $(C^{t+1}, A^{t+1}, E^{t+1})$ is bounded.

Proof. For Algorithm 1, we can find that:

\[
L(C^{t+1}, A^{t+1}, E^{t+1}, Y^{t}, \mu^{t})
\le L(C^{t}, A^{t}, E^{t}, Y^{t}, \mu^{t})
= L(C^{t}, A^{t}, E^{t}, Y^{t-1}, \mu^{t-1})
+ \sum_{i,j} \frac{\mu_{i,j}^{t-1} + \mu_{i,j}^{t}}{2\,(\mu_{i,j}^{t-1})^2}\,\big\|Y_{i,j}^{t} - Y_{i,j}^{t-1}\big\|_F^2.
\]

By the assumption that $\sum_{t=1}^{\infty} \frac{\mu_{i,j}^{t-1} + \mu_{i,j}^{t}}{2\,(\mu_{i,j}^{t-1})^2} < \infty$ and the boundedness of $Y_{i,j}^{t}$ $\forall i,j$, we have that $L(C^{t+1}, A^{t+1}, E^{t+1}, Y^{t}, \mu^{t})$ is upper bounded. Thus $\sum_{i,j} \|A_j^{t}\|_* + \lambda_{i,j}\|E_{i,j}^{t}\|_1$ is bounded, and so is the sequence $(C^{t+1}, A^{t+1}, E^{t+1})$.

Proposition 3. The accumulation point $(\dot{C}^{*}, \dot{A}^{*}, \dot{E}^{*})$ of the sequence $(\dot{C}^{t+1}, \dot{A}^{t+1}, \dot{E}^{t+1})$ is optimal for the problem in Eqn. 5.

Proof. Let $f^{*}$ denote the optimal value of the problem in Eqn. 5. For $(\dot{C}^{t+1}, \dot{A}^{t+1}, \dot{E}^{t+1})$, we have the following:

\[
\begin{aligned}
L(\dot{C}^{t+1}, \dot{A}^{t+1}, \dot{E}^{t+1}, \dot{Y}^{t}, \mu^{t})
&= \min_{C,A,E} L(C, A, E, \dot{Y}^{t}, \mu^{t})\\
&\le \min_{C_i + A_j + E_{i,j} = X_{i,j},\ \forall (i,j)} L(C, A, E, \dot{Y}^{t}, \mu^{t})\\
&= \min_{C_i + A_j + E_{i,j} = X_{i,j},\ \forall (i,j)} \sum_{i,j} \|A_j\|_* + \lambda_{i,j}\|E_{i,j}\|_1 \;=\; f^{*}.
\end{aligned}
\]

We also have:

\[
\sum_{i,j} \|\dot{A}_j^{t+1}\|_* + \lambda_{i,j}\|\dot{E}_{i,j}^{t+1}\|_1
= L(\dot{C}^{t+1}, \dot{A}^{t+1}, \dot{E}^{t+1}, \dot{Y}^{t}, \mu^{t})
- \sum_{i,j} \frac{\|\dot{Y}_{i,j}^{t+1}\|_F^2 - \|\dot{Y}_{i,j}^{t}\|_F^2}{2\,\mu_{i,j}^{t}}
\;\le\; f^{*} + O\big((\mu_{i,j}^{t})^{-1}\big),
\]

where we use the knowledge that $\dot{Y}_{i,j}^{t+1}$ is bounded $\forall i,j$. Taking $t \to \infty$, we have $\sum_{i,j} \|\dot{A}_j^{*}\|_* + \lambda_{i,j}\|\dot{E}_{i,j}^{*}\|_1 \le f^{*}$. Using $(\dot{Y}_{i,j}^{t} - \dot{Y}_{i,j}^{t-1}) = \mu_{i,j}^{t-1}\,(X_{i,j} - \dot{C}_i^{t} - \dot{A}_j^{t} - \dot{E}_{i,j}^{t})$ and the boundedness of $\dot{Y}_{i,j}^{t+1}$ $\forall i,j$, we also have $X_{i,j} - \dot{C}_i^{*} - \dot{A}_j^{*} - \dot{E}_{i,j}^{*} = 0$ $\forall i,j$; that is, $(\dot{C}^{*}, \dot{A}^{*}, \dot{E}^{*})$ is feasible, so its objective value is at least $f^{*}$ and hence equals $f^{*}$. Thus $(\dot{C}^{*}, \dot{A}^{*}, \dot{E}^{*})$ is the optimal solution for Eqn. 5.

Proof of Theorem 1. By $X_{i,j} - C_i^{t+1} - A_j^{t+1} - E_{i,j}^{t+1} = (\mu_{i,j}^{t})^{-1}\,(Y_{i,j}^{t+1} - Y_{i,j}^{t})$ and the boundedness of $Y_{i,j}^{t+1}$, we have $\lim_{t\to\infty} C_i^{t+1} + A_j^{t+1} + E_{i,j}^{t+1} = X_{i,j}$ $\forall i,j$, i.e., $(C^{t+1}, A^{t+1}, E^{t+1})$ approaches a feasible solution. In addition, we have

\[
\|A_j^{t+1} - A_j^{t}\|_F = \big\|(\mu_{i,j}^{t})^{-1}\,(\hat{Y}_{i,j}^{t+1} - \tilde{Y}_{i,j}^{t+1})\big\|_F.
\]

With the assumption $\sum_{t=1}^{\infty} (\mu_{i,j}^{t})^{-1} < \infty$ and the boundedness of $\sum_i \hat{Y}_{i,j}^{t+1}$ and $\tilde{Y}_{i,j}^{t+1}$, $A_j^{t}$ has a limit $A_j^{*}$. Similarly:

\[
\|A_j^{t+1} - A_j^{t} + C_i^{t+1} - C_i^{t}\|_F = \big\|(\mu_{i,j}^{t})^{-1}\,(Y_{i,j}^{t+1} - \tilde{Y}_{i,j}^{t+1})\big\|_F.
\]

Thus $\lim_{t\to\infty} A_j^{t+1} - A_j^{t} + C_i^{t+1} - C_i^{t} = 0$. Since $A_j^{t}$ has limit $A_j^{*}$, $C_i^{t}$ has a limit $C_i^{*}$, and then $E_{i,j}^{t}$ has limit $X_{i,j} - A_j^{*} - C_i^{*}$. So $(C^{*}, A^{*}, E^{*})$ is a feasible solution.

Considering the subgradients and the optimality of $E_{i,j}^{t+1}$ and $A_j^{t+1}$, we have $\tilde{Y}_{i,j}^{t+1} \in \lambda_{i,j}\,\partial\|E_{i,j}^{t+1}\|_1$ and $\sum_i \hat{Y}_{i,j}^{t+1} \in \partial\|A_j^{t+1}\|_*$. According to the property of subgradients:

\[
\sum_{i,j} \big(\|A_j^{t+1}\|_* + \lambda_{i,j}\|E_{i,j}^{t+1}\|_1\big)
- \sum_{i,j} \big(\|\dot{A}_j^{t+1}\|_* + \lambda_{i,j}\|\dot{E}_{i,j}^{t+1}\|_1\big)
\le \sum_j \Big\langle \sum_i \hat{Y}_{i,j}^{t+1},\, A_j^{t+1} - \dot{A}_j^{t+1} \Big\rangle
+ \sum_{i,j} \big\langle \tilde{Y}_{i,j}^{t+1},\, E_{i,j}^{t+1} - \dot{E}_{i,j}^{t+1} \big\rangle,
\]

where the right-hand side vanishes as $t \to \infty$ by the boundedness of the multiplier sequences and the assumption $\sum_{t=1}^{\infty} (\mu_{i,j}^{t})^{-1} < \infty$.

By Propositions 1 and 2, $Y_{i,j}^{t+1}$ and $\dot{Y}_{i,j}^{t+1}$ are bounded; by Proposition 3, $\sum_{i,j} \|\dot{A}_j^{*}\|_* + \lambda_{i,j}\|\dot{E}_{i,j}^{*}\|_1 = f^{*}$; and by the assumption $\lim_{t\to\infty} \mu_{i,j}^{t}\,(E_{i,j}^{t+1} - E_{i,j}^{t}) = 0$, we have $\sum_{i,j} \|A_j^{*}\|_* + \lambda_{i,j}\|E_{i,j}^{*}\|_1 = f^{*}$. That is, $(C^{*}, A^{*}, E^{*})$ is optimal for the problem in Eqn. 5. This completes the proof of Theorem 1.

