
ICCV 2021接收结果出炉!最新40篇论文分方向汇总(附打包下载)

2021-07-23  极市平台


Editor's Note

The ICCV 2021 acceptance results are out. Did your paper make it?


Not long ago, the acceptance results for ICCV 2021, one of the three top conferences in computer vision, were announced. ICCV received 6,236 valid submissions this year, of which 1,617 papers were accepted, for an acceptance rate of 25.9%.
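The acceptance rate follows directly from the two counts above; a quick sanity check in Python (numbers taken from this post):

```python
# Acceptance statistics reported for ICCV 2021 in this post.
submissions = 6236   # valid submissions
accepted = 1617      # accepted papers

# Acceptance rate = accepted / submissions, as a percentage
# rounded to one decimal place.
rate = round(accepted / submissions * 100, 1)
print(f"{rate}%")  # → 25.9%
```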

Accepted paper IDs:
https://docs.google.com/spreadsheets/u/1/d/e/2PACX-1vRfaTmsNweuaA0Gjyu58H_Cx56pGwFhcTYII0u1pg0U7MbhlgY0R6Y-BbK3xFhAiwGZ26u3TAtN5MnS/pubhtml

For now, only the officially published list of accepted paper IDs is visible, and the accepted papers themselves are not yet known. After the results came out, however, some authors showcased their accepted work on social media, and several have already released open-source code.

极市平台 has sorted the accepted ICCV 2021 papers into categories, including detection, segmentation, estimation, tracking, visual localization, low-level image processing, image/video retrieval, 3D vision, and more. All of our ICCV 2021 paper collections are maintained in our GitHub project, which has already earned 1,200 stars.

The GitHub project will be updated continuously. Project address:

https://github.com/extreme-assistant/ICCV2021-Paper-Code-Interpretation/blob/master/ICCV2021.md

The 40 most recently collected papers are listed below. Reply "ICCV2021" to the 极市平台 WeChat official account to download the latest ICCV 2021 paper collection.

Neural Network Structure Design

Transformer

[3] Rethinking Spatial Dimensions of Vision Transformers
paper:https://arxiv.org/abs/2103.16302
code:https://github.com/naver-ai/pit

[2] Generic Attention-model Explainability for Interpreting Bi-Modal and Encoder-Decoder Transformers(Oral)
paper:https://arxiv.org/pdf/2103.15679.pdf
code:https://github.com/hila-chefer/Transformer-MM-Explainability

[1] Pyramid Vision Transformer: A Versatile Backbone for Dense Prediction without Convolutions(Oral)
paper:https://arxiv.org/abs/2102.12122
code:https://github.com/whai362/PVT
analysis: A major PVT upgrade: three improvements, substantially better performance

Detection

2D Object Detection

[5] Active Learning for Deep Object Detection via Probabilistic Modeling
paper:https://arxiv.org/abs/2103.16130

[4] Detecting Invisible People
paper:https://arxiv.org/abs/2012.08419
project:https://www.cs.cmu.edu/~tkhurana/invisible.htm
video:https://youtu.be/StEfnshXrCE

[3] Conditional Variational Capsule Network for Open Set Recognition
paper:https://arxiv.org/abs/2104.09159
code:https://github.com/guglielmocamporese/cvaecaposr

[2] MDETR: Modulated Detection for End-to-End Multi-Modal Understanding(Oral)
paper:https://arxiv.org/pdf/2104.12763
code:https://github.com/ashkamath/mdetr
project:https://ashkamath.github.io/mdetr_page/
colab:https://colab.research.google.com/github/ashkamath/mdetr/blob/colab/notebooks/MDETR_demo.ipynb

[1] DetCo: Unsupervised Contrastive Learning for Object Detection
paper:https://arxiv.org/abs/2102.04803
code:https://github.com/xieenze/DetCo

Segmentation

Image Segmentation

[2] Labels4Free: Unsupervised Segmentation using StyleGAN
paper:https://arxiv.org/abs/2103.14968
code:https://rameenabdal.github.io/Labels4Free
project:https://rameenabdal.github.io/Labels4Free/

[1] Mining Latent Classes for Few-shot Segmentation(Oral)
paper:https://arxiv.org/abs/2103.15402
code:https://github.com/LiheYoung/MiningFSS

Instance Segmentation

[2] Crossover Learning for Fast Online Video Instance Segmentation
code:https://github.com/hustvl/CrossVIS

[1] Instances as Queries
paper:https://arxiv.org/abs/2105.01928
code:https://github.com/hustvl/QueryInst

Semantic Segmentation

[1] Calibrated Adversarial Refinement for Stochastic Semantic Segmentation
paper:https://arxiv.org/abs/2006.13144
code:https://github.com/EliasKassapis/CARSSS

GAN/Generative/Adversarial

[2] Labels4Free: Unsupervised Segmentation using StyleGAN
paper:https://arxiv.org/abs/2103.14968
code:https://rameenabdal.github.io/Labels4Free
project:https://rameenabdal.github.io/Labels4Free/

[1] EigenGAN: Layer-Wise Eigen-Learning for GANs
paper:https://arxiv.org/abs/2104.12476
code:https://github.com/LynnHo/EigenGAN-Tensorflow

Image Processing

[1] Equivariant Imaging: Learning Beyond the Range Space(Oral)
paper:https://arxiv.org/pdf/2103.14756.pdf

Super Resolution

[1] Learning for Scale-Arbitrary Super-Resolution from Scale-Specific Networks
paper:https://arxiv.org/abs/2004.03791
code:https://github.com/LongguangWang/ArbSR

Style Transfer

[1] Multiple Heads are Better than One: Few-shot Font Generation with Multiple Localized Experts(font generation)
paper:https://arxiv.org/abs/2104.00887
code:https://github.com/clovaai/mxfont

Estimation

Human Pose Estimation

[1] HuMoR: 3D Human Motion Model for Robust Pose Estimation(Oral)
paper:https://geometry.stanford.edu/projects/humor/docs/humor.pdf
video:https://youtu.be/5VWirxUHG0Y
project:https://geometry.stanford.edu/projects/humor/

Image & Video Retrieval / Video Understanding

Person Re-Identification/Detection

[1] TransReID: Transformer-based Object Re-Identification
paper:https://arxiv.org/abs/2102.04378
code:https://github.com/heshuting555/TransReID
analysis: A knockout blow from Transformers: TransReID, proposed by Alibaba & Zhejiang University, leads across all ReID tasks

Visual Localization

[2] TS-CAM: Token Semantic Coupled Attention Map for Weakly Supervised Object Localization
paper:https://arxiv.org/abs/2103.14862
code:https://github.com/vasgaowei/TS-CAM

[1] Boundary-sensitive Pre-training for Temporal Localization in Videos
paper:https://arxiv.org/abs/2011.10830

Image Matching

[1] COTR: Correspondence Transformer for Matching Across Images
paper:https://arxiv.org/abs/2103.14167

3D Vision

[1] MVTN: Multi-View Transformation Network for 3D Shape Recognition
paper:https://arxiv.org/abs/2011.13244

Object Tracking

[1] Detecting Invisible People
paper:https://arxiv.org/abs/2012.08419
project:https://www.cs.cmu.edu/~tkhurana/invisible.htm
video:https://youtu.be/StEfnshXrCE

Remote Sensing Image

[1] Seasonal Contrast: Unsupervised Pre-Training from Uncurated Remote Sensing Data
paper:https://arxiv.org/abs/2103.16607
code:https://github.com/ElementAI/seasonal-contrast

Scene Graph

Scene Graph Generation

[1] Unconstrained Scene Generation with Locally Conditioned Radiance Fields
paper:https://arxiv.org/abs/2104.00670

Scene Graph Prediction

[1] Generative Compositional Augmentations for Scene Graph Prediction
paper:https://arxiv.org/abs/2007.05756
code:https://github.com/bknyaz/sgg

Data Processing

Data Augmentation

[1] MixMo: Mixing Multiple Inputs for Multiple Outputs via Deep Subnetworks
paper:https://arxiv.org/abs/2103.06132

Anomaly Detection

[1] Weakly-supervised Video Anomaly Detection with Robust Temporal Feature Magnitude Learning
paper:https://arxiv.org/abs/2101.10030
code:https://github.com/tianyu0207/RTFM

Representation Learning

[1] In-Place Scene Labelling and Understanding with Implicit Scene Representation(Oral)
paper:https://arxiv.org/abs/2103.15875
project:https://shuaifengzhi.com/Semantic-NeRF/

Transfer Learning

[2] Seasonal Contrast: Unsupervised Pre-Training from Uncurated Remote Sensing Data
paper:https://arxiv.org/abs/2103.16607
code:https://github.com/ElementAI/seasonal-contrast

[1] Calibrated prediction in and out-of-domain for state-of-the-art saliency modeling
paper:https://arxiv.org/abs/2105.12441

Metric Learning

[1] Learning with Memory-based Virtual Classes for Deep Metric Learning
paper:https://arxiv.org/abs/2103.16940

Incremental Learning

[1] Always Be Dreaming: A New Approach for Data-Free Class-Incremental Learning
paper:https://arxiv.org/abs/2106.09701
code:https://github.com/GT-RIPL/AlwaysBeDreaming-DFCIL
project:https://jamessealesmith.github.io/project/dfcil/

Contrastive Learning

[1] CoMatch: Semi-supervised Learning with Contrastive Graph Regularization
paper:https://arxiv.org/abs/2011.11183
code:https://github.com/salesforce/CoMatch

Active Learning

[1] Active Learning for Deep Object Detection via Probabilistic Modeling
paper:https://arxiv.org/abs/2103.16130

Visual Reasoning/VQA

[2] On the hidden treasure of dialog in video question answering
paper:https://arxiv.org/abs/2103.14517

[1] Just Ask: Learning to Answer Questions from Millions of Narrated Videos(Oral)
paper:https://arxiv.org/abs/2012.00451
code:https://github.com/antoyang/just-ask
project:https://antoyang.github.io/just-ask.html

Dataset

[1] 4DComplete: Non-Rigid Motion Estimation Beyond the Observable Surface(4D reconstruction)
paper:https://arxiv.org/abs/2105.01905
dataset:https://github.com/rabbityl/DeformingThings4D
video:https://youtu.be/QrSsVoTRpWk

Others

Pathdreamer: A World Model for Indoor Navigation(visual navigation)
paper:https://arxiv.org/abs/2105.08756

iPOKE: Poking a Still Image for Controlled Stochastic Video Synthesis
paper:https://arxiv.org/abs/2107.02790
code:https://github.com/CompVis/ipoke
project:https://compvis.github.io/ipoke/

Putting NeRF on a Diet: Semantically Consistent Few-Shot View Synthesis
paper:https://arxiv.org/abs/2104.00677
project:https://www.ajayj.com/dietnerf

KiloNeRF: Speeding up Neural Radiance Fields with Thousands of Tiny MLPs
paper:https://arxiv.org/abs/2103.13744
code:https://github.com/creiser/kilonerf

极市平台 closely follows the major vision conferences and compiles their resources every year, including paper analyses, code, live technical talks, topic-by-topic roundups, and best-paper summaries, with strong support from the developer community. This year we will likewise track ICCV 2021 in real time.
