我爱计算机视觉 (I Love Computer Vision)

@AI52CV

The official code repositories of the WeChat public account 我爱计算机视觉 (I Love Computer Vision).


    我爱计算机视觉/dino

    Facebook, which brought the Transformer into computer vision, has another new finding: self-supervised learning and Vision Transformers are an even better match. Self-supervised Vision Transformers inherently capture fine-grained semantic-segmentation details of an image, and when used for ImageNet classification they reach 80.1% Top-1 accuracy! Their self-supervised + ViT system, DINO, is now open source (see the sketch below). Original: https://github.com/facebookresearch/dino
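
    A hedged usage sketch: the snippet below loads a pretrained DINO ViT-S/16 backbone through torch.hub, following the entry-point name advertised in the repo's README; the exact name and the (1, 384) output shape are assumptions, not verified against the current code.

        # Minimal DINO feature-extraction sketch (assumes torch is installed
        # and the 'dino_vits16' hub entry point exists as in the README).
        import torch

        model = torch.hub.load('facebookresearch/dino:main', 'dino_vits16')
        model.eval()

        # One dummy 224x224 RGB image; real inputs should use the ImageNet
        # normalization the repo recommends.
        image = torch.randn(1, 3, 224, 224)

        with torch.no_grad():
            features = model(image)  # CLS-token embedding, (1, 384) for ViT-S/16

        print(features.shape)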

    我爱计算机视觉/PP-YOLOv2

    Article: https://mp.weixin.qq.com/s/l5TSiuXIwOXJuLFNAmtQ8Q Original: https://github.com/PaddlePaddle/PaddleDetection

    我爱计算机视觉/TimeSformer

    Facebook AI proposes a new video understanding architecture: the first built entirely on Transformers, with no convolutions, fast training, and low compute cost. Article: https://mp.weixin.qq.com/s/e7MWXLo9jrpVMo7lm5Q86w Original: https://github.com/facebookresearch/TimeSformer

    我爱计算机视觉/PoseFormer

    PoseFormer: the first purely Transformer-based network for 3D human pose estimation, with SOTA performance. Article: https://mp.weixin.qq.com/s/DKWSeRu_ThMf_vf9j1GCbQ Original: https://github.com/zczcwh/PoseFormer

    我爱计算机视觉/PVT

    Pyramid Vision Transformer, explained in plain language. Article: https://mp.weixin.qq.com/s/oJHZWmStQYYzEEOveiuQrQ Original: https://github.com/whai362/PVT

    我爱计算机视觉/dall-e

    Igniting the AI community: generating images from text with no domain-hopping required, OpenAI's new model breaks down the wall between natural language and vision. Article: https://mp.weixin.qq.com/s/cVxFqVz_RFhAINeGC4QkTw Original: https://github.com/openai/dall-e

    我爱计算机视觉/CLIP

    Igniting the AI community: generating images from text with no domain-hopping required, OpenAI's new model breaks down the wall between natural language and vision (see the zero-shot sketch below). Article: https://mp.weixin.qq.com/s/cVxFqVz_RFhAINeGC4QkTw Original: https://github.com/openai/CLIP
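
    A hedged zero-shot classification sketch with CLIP, following the API shown in the repo's README (clip.load, clip.tokenize); the image file name 'cat.jpg' is a hypothetical example input.

        # Minimal CLIP zero-shot classification sketch.
        import torch
        import clip  # pip install git+https://github.com/openai/CLIP.git
        from PIL import Image

        device = "cuda" if torch.cuda.is_available() else "cpu"
        model, preprocess = clip.load("ViT-B/32", device=device)

        image = preprocess(Image.open("cat.jpg")).unsqueeze(0).to(device)  # hypothetical file
        texts = clip.tokenize(["a photo of a cat", "a photo of a dog"]).to(device)

        with torch.no_grad():
            logits_per_image, _ = model(image, texts)
            probs = logits_per_image.softmax(dim=-1)

        print(probs)  # higher probability for the matching caption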

    我爱计算机视觉/vision_transformer

    Goodbye, CNNs? An image is worth 16x16 words: computer vision adopts the Transformer (see the patch-embedding sketch below). Article: https://mp.weixin.qq.com/s/JKC20zRleNVIHMpt-02uag Original: https://github.com/google-research/vision_transformer
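
    To make the "16x16 words" idea concrete, here is a minimal PyTorch sketch of ViT's patch-embedding step; the repo itself is written in JAX/Flax, so the names and sizes below are illustrative assumptions, not its actual code.

        # Split an image into 16x16 patches and project each into a token.
        import torch
        import torch.nn as nn

        patch_size, embed_dim = 16, 768
        image = torch.randn(1, 3, 224, 224)  # one 224x224 RGB image

        # A stride-16 convolution yields non-overlapping 16x16 patches.
        to_tokens = nn.Conv2d(3, embed_dim, kernel_size=patch_size, stride=patch_size)

        tokens = to_tokens(image)                   # (1, 768, 14, 14)
        tokens = tokens.flatten(2).transpose(1, 2)  # (1, 196, 768): 196 "words"

        print(tokens.shape)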

    我爱计算机视觉/LSTR

    The Transformer delivers again: a lane detection algorithm that is both fast (420 fps) and accurate. Article: https://mp.weixin.qq.com/s/hjHXWewRYh_6j5cd1J2n_w Original: https://github.com/liuruijin17/LSTR

    我爱计算机视觉/Swin-Transformer

    Microsoft's Swin Transformer, which swept the leaderboards of major CV tasks, has recently open-sourced its code and pretrained models (see the window-partition sketch below). This is an official implementation for "Swin Transformer: Hierarchical Vision Transformer using Shifted Windows". Original: https://github.com/microsoft/Swin-Transformer
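
    A minimal sketch of the shifted-window idea behind Swin: the feature map is cut into non-overlapping MxM windows so self-attention runs per window, and the shifted variant simply rolls the map by M/2 first. Shapes assume Swin-T's first stage (56x56 map, window size 7); this mirrors, but is not, the repo's exact code.

        import torch

        def window_partition(x, window_size):
            """(B, H, W, C) -> (num_windows * B, window_size, window_size, C)."""
            B, H, W, C = x.shape
            x = x.view(B, H // window_size, window_size,
                       W // window_size, window_size, C)
            return x.permute(0, 1, 3, 2, 4, 5).reshape(-1, window_size, window_size, C)

        x = torch.randn(1, 56, 56, 96)                         # stage-1 Swin-T feature map
        shifted = torch.roll(x, shifts=(-3, -3), dims=(1, 2))  # shift by window_size // 2
        windows = window_partition(shifted, window_size=7)
        print(windows.shape)                                   # (64, 7, 7, 96)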

    我爱计算机视觉/MedMNIST

    MedMNIST: Shanghai Jiao Tong University releases an MNIST for medical imaging (a loading sketch follows). Article: https://zhuanlan.zhihu.com/p/270144930 Original: https://github.com/MedMNIST/MedMNIST
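
    A hedged loading sketch, assuming the medmnist pip package; the class and argument names follow its README but may differ across package versions.

        # Download and inspect one MedMNIST dataset (PathMNIST).
        from medmnist import PathMNIST

        train = PathMNIST(split="train", download=True)
        image, label = train[0]   # image plus its label array
        print(len(train), label)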

    我爱计算机视觉/YOLObile

    This is the implementation of "YOLObile: Real-Time Object Detection on Mobile Devices via Compression-Compilation Co-Design". Original: https://github.com/nightsnack/YOLObile

    我爱计算机视觉/BASNet

    Code for the CVPR 2019 paper "BASNet: Boundary-Aware Salient Object Detection". Original: https://github.com/xuebinqin/BASNet

    我爱计算机视觉/U-2-Net

    The code for our newly accepted paper in Pattern Recognition 2020: "U^2-Net: Going Deeper with Nested U-Structure for Salient Object Detection". Original: https://github.com/xuebinqin/U-2-Net

    我爱计算机视觉/nanodet

    ⚡Super fast and lightweight anchor-free object detection model. 🔥Only 1.8 MB, and it runs at 97 FPS on a cellphone🔥 Original: https://github.com/RangiLyu/nanodet

    我爱计算机视觉/automl-EfficientDet

    This repository contains a list of AutoML-related models and libraries, including EfficientDet: Scalable and Efficient Object Detection (https://github.com/google/automl/tree/master/efficientdet). Original: https://github.com/google/automl

    我爱计算机视觉/VIBE

    Official implementation of the CVPR 2020 paper "VIBE: Video Inference for Human Body Pose and Shape Estimation". Original: https://github.com/mkocabas/VIBE

    我爱计算机视觉/CVPR2020-OOH forked from 东南大学-王雁刚/CVPR2020-OOH

    This is the repository of the implementation of a CVPR 2020 paper.

    我爱计算机视觉/pointlstm-gesture-recognition-pytorch

    This repo holds the code for the paper "An Efficient PointLSTM for Point Clouds Based Gesture Recognition" (CVPR 2020). Original: https://github.com/Blueprintf/pointlstm-gesture-recognition-pytorch

    我爱计算机视觉/HOPE-Net

    Source code of the CVPR 2020 paper "HOPE-Net: A Graph-based Model for Hand-Object Pose Estimation". Original: https://github.com/bardiadoosti/HOPE
