Technical innovation, transforming the future.

Image and Video Processing with Deep Learning: See More Clearly, Understand More Deeply
Outline: night-scene enhancement; image and video deblurring; video super-resolution

1. Night-Scene Image Enhancement

Taking photos is easy, but amateur photographers typically produce underexposed photos, so photo enhancement is required.
Image enhancement comparison: Input; "Auto Enhance" on iPhone; "Auto Tone" in Lightroom; Ours. (Existing photo editing tools.)

Previous Work
Retinex-based methods: LIME [TIP 17], WVM [CVPR 16], JieP [ICCV 17].
Learning-based methods: HDRNet [SIGGRAPH 17], White-Box [ACM TOG 18], Distort-and-Recover [CVPR 18], DPE [CVPR 18].
Visual comparison: Input; WVM [CVPR'16]; JieP [ICCV'17]; HDRNet [SIGGRAPH'17]; DPE [CVPR'18]; White-Box [TOG'18]; Distort-and-Recover [CVPR'18]; Ours.

Limitations of Previous Methods

Why This Model?
Illumination maps for natural images typically have relatively simple forms with known priors, and the model enables customizing the enhancement results by formulating constraints on the illumination.
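The illumination formulation above follows a Retinex-style decomposition: if the network predicts an illumination map S for underexposed input I, the well-exposed image is recovered by division. A minimal sketch under that assumption (the function name and the uniform toy maps are ours, not from the slides):

```python
import numpy as np

def enhance_with_illumination(image, illumination, eps=1e-3):
    """Retinex-style enhancement: image = illumination * enhanced,
    so the enhanced image is recovered by per-pixel division.
    Both arrays are floats in [0, 1] with the same shape."""
    s = np.clip(illumination, eps, 1.0)   # avoid division blow-up in dark regions
    return np.clip(image / s, 0.0, 1.0)

# Toy example: an underexposed image under illumination 0.25 is brightened 4x.
dark = np.full((2, 2, 3), 0.2)
illum = np.full((2, 2, 3), 0.25)
out = enhance_with_illumination(dark, illum)   # all values become 0.8
```

Because the illumination map is smooth and low-dimensional, predicting S (rather than the enhanced image directly) is what makes constraints on the result easy to formulate.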
Advantage: effective and efficient learning.

Network Architecture
Ablation study: Input; naive regression; expert-retouched.

Our Dataset
Motivation: the existing benchmark dataset was collected for enhancing general photos rather than underexposed ones, and contains only a small number of underexposed images covering limited lighting conditions.

Quantitative Comparison: Our Dataset
Method                          PSNR    SSIM
HDRNet                          26.33   0.743
DPE                             23.58   0.737
White-Box                       21.69   0.718
Distort-and-Recover             24.54   0.712
Ours (w/o all three loss terms) 27.02   0.762
Ours (one loss term)            28.97   0.783
Ours (two loss terms)           30.03   0.822
Ours (full)                     30.97   0.856

Quantitative Comparison: MIT-Adobe FiveK
Method                          PSNR    SSIM
HDRNet                          28.61   0.866
DPE                             24.66   0.850
White-Box                       23.69   0.701
Distort-and-Recover             28.41   0.841
Ours (w/o all three loss terms) 28.81   0.867
Ours (one loss term)            29.41   0.871
Ours (two loss terms)           30.71   0.884
Ours (full)                     30.80   0.893

Visual Comparison: Our Dataset
Input; JieP; HDRNet; DPE; White-Box; Distort-and-Recover; Our result; Expert-retouched.

Visual Comparison: MIT-Adobe FiveK
Input; JieP; HDRNet; DPE; White-Box; Distort-and-Recover; Our result; Expert-retouched.

More Comparison Results: User Study
Input; WVM; JieP; HDRNet; DPE; White-Box; Distort-and-Recover; Our result.

Limitation
Input; Our result.
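The PSNR columns in the tables above are the standard peak signal-to-noise ratio in dB. A minimal sketch of how such a score is computed (function name ours):

```python
import numpy as np

def psnr(reference, result, peak=1.0):
    """Peak signal-to-noise ratio in dB between two float images in [0, peak]."""
    mse = np.mean((reference.astype(np.float64) - result.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")          # identical images
    return 10.0 * np.log10(peak ** 2 / mse)
```

For example, a uniform error of 0.1 on a unit-range image gives an MSE of 0.01 and hence 20 dB; the 4-point gaps in the tables correspond to large differences in mean squared error.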
Presenter notes (2019-05-08 03:51:53): Our work also has some limitations. The first is regions that are almost black, without any trace of texture, as the top two images show. The second is that our method does not remove noise in the enhanced result.
More Results (repeated result slides): Input; JieP; HDRNet; DPE; White-Box; Distort-and-Recover; Our result; Expert-retouched.
More Results: Input; WVM; JieP; HDRNet; DPE; White-Box; Distort-and-Recover; Our result.
More Results: Input; iPhone; Lightroom; Our result.
2. Video Super-Resolution

Motivation
Old and fundamental: studied from several decades ago [Huang et al., 1984] to the near present.
Many applications: HD video generation from low-res sources; video enhancement with details; text/object recognition in surveillance videos.

Presenter notes: The target of video super-resolution is to increase the resolution of videos with rich details. It is an old and fundamental problem that has been studied for several decades. Video SR enables many applications, such as high-definition video generation from low-res sources. It also enables video enhancement with details: in this example, the characters on the roof and the textures of the tree are much clearer in the SR result than in the input. And it can benefit text or object recognition in low-quality surveillance videos: here, the numbers on the car become recognizable only in the super-resolved result.

Previous Work
Image SR. Traditional: [Freeman et al., 2002], [Glasner et al., 2009], [Yang et al., 2010], etc. CNN-based: SRCNN [Dong et al., 2014], VDSR [Kim et al., 2016], FSRCNN [Dong et al., 2016], etc.
Video SR. Traditional: 3DSKR [Takeda et al., 2009], BayesSR [Liu et al., 2011], MFSR [Ma et al., 2015], etc. CNN-based: DESR [Liao et al., 2015], VSRNet [Kappeler et al., 2016], [Caballero et al., 2016], etc.

Presenter notes: Much prior work has been proposed for super-resolution; we list several representative methods here.
Remaining Challenges
Effectiveness: how to make good use of multiple frames? (Data from Vid4 [Ce Liu et al.]; bicubic ×4; misalignment, large motion, occlusion.)

Presenter notes: Although video SR has long been studied, challenges remain. The most important one is effectiveness: how to make good use of multiple frames? As this example shows, objects in neighboring frames are not aligned, and in extreme cases there is even large motion or occlusion, which is very hard to handle. So are multiple frames useful or harmful to super-resolution?
Remaining Challenges
Are the generated details real? (Bicubic ×4 vs. image SR vs. truth.)

Presenter notes: On the other hand, are the generated details real? CNN-based SR methods incorporate external data, so even from a single frame they can produce sharp structures. In this example, on the right-hand side, one SR method generates clear window patterns on the building, but they are far from the real ones on the left. The problem is that details learned from external data may not be true for the input image.
Remaining Challenges: Model Issues
One model for one setting: VDSR [Kim et al., 2016], ESPCN [Shi et al., 2016], VSRNet [Kappeler et al., 2016]. Intensive parameter tuning; slow.

Presenter notes: There are also model issues in current methods. In all recent CNN-based SR methods, model parameters are fixed for a certain scale factor or number of frames; to change the scale factor, you must change the network configuration and train another model. Most traditional video SR methods also involve intensive parameter tuning and can be slow. These issues prevent practical usage.
Our Method: Advantages
Better use of sub-pixel motion; promising results both visually and quantitatively.
Fully scalable: arbitrary input size, arbitrary scale factor, arbitrary number of temporal frames.

Presenter notes: The goals of our method are as follows. We try to make better use of sub-pixel motion between frames and produce high-quality results with real details. We also want the framework to be fully scalable in input image size, scale factor, and frame number.
(Data from Vid4 [Ce Liu et al.].)

Presenter notes: Here is one video example. Characters, numbers, and textures are hard to recognize in the bicubic result; our results are much clearer.
Motion Estimation (Our Method)
Pipeline: I_i, I_0 -> ME -> F_{i->0}.

Presenter notes: Due to the time limit, we only briefly describe our method here; you are welcome to visit our poster session for details. The method contains three components. The first is a motion-estimation network: it takes two low-res images as input and outputs a low-res motion field.
Sub-pixel Motion Compensation (SPMC) Layer (Our Method)
Pipeline: I_i, I_0 -> ME -> F_{i->0} -> SPMC.

Presenter notes: The second module is newly designed; we call it the sub-pixel motion compensation layer. It takes the i-th low-res frame and the estimated motion field as input and outputs a high-res image. Unlike previous methods, this layer achieves resolution enhancement and motion compensation simultaneously, which better preserves sub-pixel information across frames.
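In spirit, the layer forward-warps each low-res pixel to a sub-pixel position on an enlarged grid. A toy sketch of that idea (assumptions ours: bilinear splatting with weight normalization; the real layer is differentiable and handles batches and channels):

```python
import numpy as np

def spmc_forward(lr, flow, scale):
    """Splat each LR pixel (x, y) onto the HR grid at the sub-pixel
    position scale * ((x, y) + flow[y, x]), with bilinear weights.
    lr: (H, W) image; flow: (H, W, 2) per-pixel (dx, dy); scale: int."""
    h, w = lr.shape
    hr = np.zeros((h * scale, w * scale))
    weight = np.zeros_like(hr)
    for y in range(h):
        for x in range(w):
            tx = scale * (x + flow[y, x, 0])
            ty = scale * (y + flow[y, x, 1])
            x0, y0 = int(np.floor(tx)), int(np.floor(ty))
            for dy in (0, 1):          # splat onto the 4 nearest HR pixels
                for dx in (0, 1):
                    xi, yi = x0 + dx, y0 + dy
                    if 0 <= xi < w * scale and 0 <= yi < h * scale:
                        wgt = (1 - abs(tx - xi)) * (1 - abs(ty - yi))
                        hr[yi, xi] += wgt * lr[y, x]
                        weight[yi, xi] += wgt
    return hr / np.maximum(weight, 1e-8)   # normalize overlapping splats
```

With zero flow, input values land exactly on every `scale`-th HR pixel; non-zero sub-pixel flow shifts where they land, which is exactly the information a backward-warp-then-resize baseline blurs away.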
Detail Fusion Net (Our Method)
Pipeline: I_i, I_0 -> ME -> F_{i->0} -> SPMC -> encoder-decoder with skip connections and a ConvLSTM whose hidden state flows from step t-1 to step t+1.

Presenter notes: In the last stage, we design a Detail Fusion Network to combine all frames. We use an encoder-decoder structure, which has proved very effective in image-regression tasks, with skip connections for better convergence. The important change is a ConvLSTM module inserted inside the network: a natural choice, since we handle sequential inputs and hope to exploit temporal information. The ConvLSTM takes information from the previous time step and passes its hidden state to the next.
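One ConvLSTM step mixes the current features with the recurrent state through four convolutional gates. A sketch using 1x1 kernels for brevity (the real module uses spatial kernels such as 3x3; all names are ours):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def convlstm_step(x, h, c, Wx, Wh, b):
    """One ConvLSTM step with 1x1 convolutions (= per-pixel channel mixing).
    x: (C_in, H, W) input features; h, c: (C, H, W) hidden/cell state.
    Wx: (4C, C_in), Wh: (4C, C), b: (4C,) gate parameters."""
    gates = (np.einsum("oc,chw->ohw", Wx, x)
             + np.einsum("oc,chw->ohw", Wh, h)
             + b[:, None, None])
    C = h.shape[0]
    i = sigmoid(gates[0 * C:1 * C])   # input gate
    f = sigmoid(gates[1 * C:2 * C])   # forget gate
    o = sigmoid(gates[2 * C:3 * C])   # output gate
    g = np.tanh(gates[3 * C:4 * C])   # candidate features
    c_new = f * c + i * g             # cell state carries temporal memory
    h_new = o * np.tanh(c_new)
    return h_new, c_new
```

Because every operation is per-pixel (convolutional), the state has the same spatial layout as the frame, which is what lets the fusion net stay fully convolutional while still accumulating information across time steps.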
Arbitrary Input Size
Fully convolutional, so any input size works.

Presenter notes: Our proposed framework is fully scalable. Input videos may be of different sizes in practice; since our network is fully convolutional, it naturally handles this.
Arbitrary Scale Factors
2×, 3×, 4×: the SPMC layer is parameter-free.

Presenter notes: When dealing with different scale factors, previous networks need to change their parameters. Ours is different: the resolution increase happens in the SPMC layer, which is parameter-free. This lets one model configuration handle all scale factors, including non-integer values.
Arbitrary Temporal Length
3 frames or 5 frames.

Presenter notes: For practical systems, we may want to choose the number of frames at test time to balance quality and efficiency. Our framework uses a ConvLSTM to handle frames sequentially, so it accepts an arbitrary temporal length.
Analysis: Details from Multiple Frames
Three identical frames -> output (identical).

Presenter notes: We run analyses to evaluate our method. First, are our recovered details real? Here we feed three identical frames to our network; this input contains no more information than a single low-res image. As expected, although sharper, the output contains no extra details, and the characters and logo are still unrecognizable.
Three consecutive frames -> output (consecutive) vs. output (identical).

Presenter notes: However, if we take three consecutive frames from the video as input, our network produces much better results: the characters and logo are very clear to read. This experiment shows that the recovered sharp structures come from real information in the inputs rather than from external information stored in the network, so the SR results can be trusted.
Ablation Study: SPMC Layer vs. Baseline
Baseline: backward warping with F_{i->0}, followed by resizing. Output (baseline).

Presenter notes: Next we run an ablation study of our SPMC layer. We substitute it with a baseline module: backward warping followed by upsampling. This baseline also compensates motion and increases resolution, and it is widely adopted in previous CNN-based methods. In this example, however, the tiles on the roof contain severe false structures due to aliasing.
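The baseline's backward-warping step resamples the source frame at positions shifted by the flow. A minimal bilinear version (function name ours; borders are clamped):

```python
import numpy as np

def backward_warp(image, flow):
    """Baseline motion compensation: sample `image` at (x, y) + flow
    with bilinear interpolation.
    image: (H, W); flow: (H, W, 2) with per-pixel (dx, dy)."""
    h, w = image.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(np.float64)
    sx = np.clip(xs + flow[..., 0], 0, w - 1)   # source coordinates
    sy = np.clip(ys + flow[..., 1], 0, h - 1)
    x0 = np.floor(sx).astype(int); y0 = np.floor(sy).astype(int)
    x1 = np.minimum(x0 + 1, w - 1); y1 = np.minimum(y0 + 1, h - 1)
    wx = sx - x0; wy = sy - y0
    top = (1 - wx) * image[y0, x0] + wx * image[y0, x1]
    bot = (1 - wx) * image[y1, x0] + wx * image[y1, x1]
    return (1 - wy) * top + wy * bot
```

Warping at low resolution and only then resizing interpolates twice, which is where the aliasing artifacts in the baseline come from; SPMC instead places samples on the high-res grid directly.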
Output (SPMC) vs. output (baseline).

Presenter notes: With our designed SPMC layer, the tile structures in the result are very faithful to the ground truth. We believe that only by properly handling motion at sub-pixel precision can we recover good results.
Comparisons
Bicubic ×4. Presenter notes: We further compare with the current state of the art. This is the bicubic-interpolated input; the windows and glass of the building are severely blurred.
BayesSR [Liu et al., 2011; Ma et al., 2015]. The result is sharp, but structures are still missing.
DESR [Liao et al., 2015]. Draft-Ensemble SR recovers a few details, but with artifacts.
VSRNet [Kappeler et al., 2016]. This recent CNN-based method produces a smooth result.
Ours. Visually, our result is much better; the edges of the buildings and windows are easy to distinguish. Flipping between the input and our result makes the changes obvious.
Running Time

Method                           Frames   Scale   Time per frame
BayesSR [Liu et al., 2011]       31       4×      2 h
MFSR [Ma et al., 2015]           31       4×      10 min
DESR [Liao et al., 2015]         31       4×      8 min
VSRNet [Kappeler et al., 2016]   5        4×      40 s
Ours (5 frames)                  5        4×      0.19 s
Ours (3 frames)                  3        4×      0.14 s

Presenter notes: We compare running time with most current methods. The Bayesian SR method needs 2 hours per frame, as reported in its paper; MFSR requires 10 minutes per frame and Draft-Ensemble SR 8 minutes; VSRNet needs 40 seconds per frame. Our framework is much faster, since all components can be computed efficiently on a GPU: 0.19 s per frame using 5 neighboring frames, further accelerated to 0.14 s with 3 frames.
More Results

Presenter notes: Here we show more video results. In the first, our method works very well, especially on the edges of the building. In the next, the tiles of the temple and the carvings on the lamp are mostly recovered.
Summary
End-to-end and fully scalable; a new SPMC layer; high quality at fast speed.

Presenter notes: In summary, we propose a new end-to-end CNN-based framework for video SR that is fully scalable. It includes a new SPMC layer that better handles inter-frame motion, and it produces high-quality results at fast speed.
3. Image and Video Deblurring

The Image Deblurring Problem
(Data from previous work.)
Previous Work: Different Blur Assumptions
Uniform: [Fergus et al., 2006], [Shan et al., 2009], [Cho et al., 2009], [Xu et al., 2010], etc. (Data from [Xu et al., 2010].)
Non-uniform: [Whyte et al., 2010], [Hirsch et al., 2011], [Zheng et al., 2013], etc. (Data from [Whyte et al., 2010].)
Dynamic: [Kim et al., 2013], [Kim et al., 2014], [Nah et al., 2017], etc. (Data from [Kim et al., 2013].)
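Under the uniform assumption above, blur is commonly modeled as the whole image convolved with a single kernel, y = k * x + n. A minimal sketch (function name ours; written in correlation form, which equals convolution for symmetric kernels):

```python
import numpy as np

def blur_uniform(sharp, kernel):
    """Uniform blur model: every pixel is blurred by the same kernel,
    implemented as a direct 2-D sliding-window sum with zero padding
    ('same' output size). Noise can be added to the result separately."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(sharp, ((ph, ph), (pw, pw)))
    h, w = sharp.shape
    out = np.zeros_like(sharp, dtype=np.float64)
    for i in range(kh):                      # accumulate shifted copies,
        for j in range(kw):                  # weighted by kernel taps
            out += kernel[i, j] * padded[i:i + h, j:j + w]
    return out
```

Non-uniform and dynamic blur drop exactly this assumption: the kernel then varies with camera rotation or per-object motion, which is why a single estimated kernel fails on real-world photos.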
Previous Work: Learning-based Methods
Early methods: [Sun et al., 2015], [Schuler et al., 2016], [Xiao et al., 2016], etc., which substitute a few traditional modules with learned parameters.
More recent: [Nah et al., 2017], [Kim et al., 2017], [Su et al., 2017], [Wieschollek et al., 2017]; network designs include encoder-decoder, multi-scale, etc.
Remaining Challenges
Complicated real-world blur. (Data from the GOPRO dataset.)
Remaining Challenges
Ill-posed problem and unstable solvers: artifacts such as ringing and noise, caused by inaccurate kernels, inaccurate models, information loss, and unstable solvers. (Data from [Mosleh et al., 2014].)
Remaining Challenges: Efficient Network Structure
U-Net / encoder-decoder network [Su et al., 2017]: input, conv layers with skip connections, output.
Multi-scale or cascaded refinement network [Nah et al., 2017]: a coarse stage produces an estimate that is resized up and fed to the fine stage.
Our Method: Merits of the Coarse-to-fine Strategy
Each scale solves the same problem, and the solver and its parameters at each scale are usually the same, which motivates sharing one solver across scales.
Scale-Recurrent Network (Our Method)
Blurred inputs B3, B2 (downsampled) and B1 (full resolution) are processed coarse to fine; a shared solver produces latent images I3, I2, I1 at each scale. The solver is an encoder-decoder built from EBlocks (conv + ResBlocks) and DBlocks (ResBlocks + deconv).
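The coarse-to-fine recurrence can be sketched as a loop that reuses one solver at every scale, seeding each scale with the upsampled estimate from the coarser one. A sketch under our own assumptions (the `solver` callable and the box/nearest resamplers stand in for the learned EBlocks/DBlocks network and its recurrent state):

```python
import numpy as np

def downsample(img, factor):
    """Box downsample by an integer factor (sketch)."""
    h, w = img.shape
    return img[:h - h % factor, :w - w % factor] \
        .reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

def upsample(img, shape):
    """Nearest-neighbor upsample to a target shape (sketch)."""
    h, w = img.shape
    ys = np.arange(shape[0]) * h // shape[0]
    xs = np.arange(shape[1]) * w // shape[1]
    return img[np.ix_(ys, xs)]

def scale_recurrent_deblur(blurred, solver, num_scales=3):
    """Coarse-to-fine loop of a scale-recurrent network: the SAME solver
    (shared parameters) runs at every scale, each time seeded with the
    upsampled estimate from the coarser scale."""
    pyramid = [blurred]
    for _ in range(num_scales - 1):
        pyramid.append(downsample(pyramid[-1], 2))
    estimate = pyramid[-1]                 # initialize at the coarsest scale
    for b in reversed(pyramid):            # coarse -> fine
        estimate = solver(b, upsample(estimate, b.shape))
    return estimate
```

Sharing one solver is what keeps the parameter count at the single-scale level (2.73M in the ablation table below) while still getting the benefit of three scales.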
(Data from the GOPRO dataset.)
Analysis: Using Different Numbers of Scales
Input; 1 scale; 2 scales; 3 scales.
Analysis: Baseline Models

Model     SS      SC      w/o R   RNN     SR-Flat
Param     2.73M   8.19M   2.73M   3.03M   2.66M
PSNR      28.40   29.05   29.26   29.35   27.53

Single Scale (SS): one solver (EBlocks + DBlocks) applied at a single scale.
Scale Cascaded (SC): independent solvers (Solver 1, 2, 3), one per scale, without parameter sharing.