# MMGCN: Multi-modal Graph Convolution Network for Personalized Recommendation of Micro-video

This is our PyTorch implementation for the paper:

> Yinwei Wei, Xiang Wang, Liqiang Nie, Xiangnan He, Richang Hong, and Tat-Seng Chua (2019). MMGCN: Multi-modal Graph Convolution Network for Personalized Recommendation of Micro-video. In ACM MM '19, Nice, France, Oct. 21-25, 2019.

## Introduction

Multi-modal Graph Convolution Network (MMGCN) is a multi-modal recommendation framework based on graph convolutional networks. It explicitly models modal-specific user preferences to enhance micro-video recommendation.

## Environment Requirement

The code has been tested running under Python 3.5.2. The required packages are as follows:

- PyTorch == 1.1.0
- torch-cluster == 1.4.2
- torch-geometric == 1.2.1
- torch-scatter == 1.2.0
- torch-sparse == 0.4.0
- numpy == 1.16.0

## Example to Run the Codes

Usage instructions for the command-line arguments are documented in the code.

- Kwai dataset:
  `python train.py --model_name='MMGCN' --l_r=0.0005 --weight_decay=0.1 --batch_size=1024 --dim_latent=64 --num_workers=30 --aggr_mode='mean' --num_layer=2 --concat=False`
- Tiktok dataset:
  `python train.py --model_name='MMGCN' --l_r=0.0005 --weight_decay=0.1 --batch_size=1024 --dim_latent=64 --num_workers=30 --aggr_mode='mean' --num_layer=2 --concat=False`
- Movielens dataset:
  `python train.py --model_name='MMGCN' --l_r=0.0001 --weight_decay=0.0001 --batch_size=1024 --dim_latent=64 --num_workers=30 --aggr_mode='mean' --num_layer=2 --concat=False`

Some important arguments:

- `model_name`: It specifies the type of model. Here we provide five options:
  1. `MMGCN` (by default), proposed in MMGCN: Multi-modal Graph Convolution Network for Personalized Recommendation of Micro-video, ACM MM 2019. Usage: `--model_name 'MMGCN'`
  2. `VBPR`, proposed in [VBPR: Visual Bayesian Personalized Ranking from Implicit Feedback](https://arxiv.org/abs/1510.01784), AAAI 2016. Usage: `--model_name 'VBPR'`
  3. `ACF`, proposed in [Attentive Collaborative Filtering: Multimedia Recommendation with Item- and Component-Level Attention](https://dl.acm.org/citation.cfm?id=3080797), SIGIR 2017. Usage: `--model_name 'ACF'`
  4. `GraphSAGE`, proposed in [Inductive Representation Learning on Large Graphs](https://arxiv.org/abs/1706.02216), NIPS 2017. Usage: `--model_name 'GraphSAGE'`
  5. `NGCF`, proposed in [Neural Graph Collaborative Filtering](https://arxiv.org/abs/1905.08108), SIGIR 2019. Usage: `--model_name 'NGCF'`
- `aggr_mode`: It specifies the type of aggregation layer (see the sketch after this list). Here we provide three options:
  1. `mean` (by default) implements mean aggregation in the aggregation layer. Usage: `--aggr_mode 'mean'`
  2. `max` implements max aggregation in the aggregation layer. Usage: `--aggr_mode 'max'`
  3. `add` implements sum aggregation in the aggregation layer. Usage: `--aggr_mode 'add'`
- `concat`: It indicates the type of combination layer. Here we provide two options:
  1. `concat` (by default) implements concatenation in the combination layer. Usage: `--concat 'True'`
  2. `ele` implements element-wise combination in the combination layer. Usage: `--concat 'False'`
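To make the `aggr_mode` and `concat` options concrete, here is a minimal sketch of one aggregation-plus-combination step. It is not the repository's implementation: `aggregate`, `combine`, `h_agg`, `h_id`, and the toy graph are hypothetical names for illustration, and only `torch_scatter` (already a listed dependency) is assumed.

```python
import torch
from torch_scatter import scatter_add, scatter_max, scatter_mean

def aggregate(x, edge_index, num_nodes, aggr_mode='mean'):
    """Gather neighbor features along edges and reduce them per target node."""
    src, dst = edge_index                     # edge_index: [2, num_edges]
    messages = x[src]                         # feature of each edge's source node
    if aggr_mode == 'mean':
        return scatter_mean(messages, dst, dim=0, dim_size=num_nodes)
    if aggr_mode == 'max':
        out, _ = scatter_max(messages, dst, dim=0, dim_size=num_nodes)
        return out
    return scatter_add(messages, dst, dim=0, dim_size=num_nodes)  # 'add'

def combine(h_agg, h_id, concat=True):
    """Combination layer: concatenation doubles the dimension, 'ele' keeps it."""
    return torch.cat([h_agg, h_id], dim=-1) if concat else h_agg + h_id

# Toy example: 4 nodes with 8-dim features and 4 directed edges.
x = torch.randn(4, 8)
edge_index = torch.tensor([[0, 1, 2, 3],
                           [1, 0, 1, 2]])
h = combine(aggregate(x, edge_index, num_nodes=4, aggr_mode='mean'), x)
print(h.shape)  # torch.Size([4, 16]) with concat=True; [4, 8] otherwise
```

Note that with `--concat 'True'` the per-layer representation width doubles, so the two combination settings lead to different downstream layer sizes.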
## Dataset

We provide three processed datasets: Kwai, Tiktok, and Movielens.

- You can find the full versions of the recommendation datasets via [Kwai](https://www.kuaishou.com/activity/uimc), [Tiktok](http://ai-lab-challenge.bytedance.com/tce/vc/), and [Movielens](https://grouplens.org/datasets/movielens/).

<!--- We select some users and micro-videos in [Kwai](https://drive.google.com/open?id=1Xk-ofNoDnwcZg_zYE5tak9s1iW195kY2) and [Tiktok](https://drive.google.com/open?id=1mlKTWugOr8TxRb3vq_-03kbr0olSJN_7) datasets according to the timestamp. - We extract the visual, acoustic, and textual features of all trailers in [Movielens](https://drive.google.com/open?id=1I1cHf9TXY88SbVCDhRiJV1drWX5Tc1-8) dataset. -->

| Dataset | #Interactions | #Users | #Items | Visual dim. | Acoustic dim. | Textual dim. |
|:-|:-|:-|:-|:-|:-|:-|
| Kwai | 1,664,305 | 22,611 | 329,510 | 2,048 | - | 100 |
| Tiktok | 726,065 | 36,656 | 76,085 | 128 | 128 | 128 |
| Movielens | 1,239,508 | 55,485 | 5,986 | 2,048 | 128 | 100 |

- `train.npy`: Train file. Each line is a user with her/his positive interactions with items: (userID and micro-video ID).
- `val.npy`: Validation file. Each line is a user with her/his 1,000 negative and several positive interactions with items: (userID and micro-video ID).
- `test.npy`: Test file. Each line is a user with her/his 1,000 negative and several positive interactions with items: (userID and micro-video ID).
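To inspect the processed files before training, here is a minimal sketch, assuming the `.npy` files store pickled NumPy object arrays (typical for ragged per-user interaction lists; the exact per-entry layout may differ):

```python
import numpy as np

# allow_pickle=True is required on NumPy >= 1.16.3 when the arrays hold
# Python objects rather than a plain numeric matrix (an assumption here).
train = np.load('train.npy', allow_pickle=True)
val = np.load('val.npy', allow_pickle=True)
test = np.load('test.npy', allow_pickle=True)

print('train entries:', len(train))
print('first entry:', train[0])  # expected: a userID followed by micro-video ID(s)
```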

## License

Copyright (C) <year> Shandong University

This program is licensed under the GNU General Public License 3.0 (https://www.gnu.org/licenses/gpl-3.0.html). Any derivative work obtained under this license must be licensed under the GNU General Public License as published by the Free Software Foundation, either Version 3 of the License, or (at your option) any later version, if this derivative work is distributed to a third party.

The copyright for the program is owned by Shandong University. For commercial projects that require the ability to distribute the code of this program as part of a program that cannot be distributed under the GNU General Public License, please contact <weiyinwei@hotmail.com> to purchase a commercial license.