Commit

update docs (#186)

shenweichen authored Jun 14, 2021
1 parent b0296ea commit 74da9d6
Showing 20 changed files with 79 additions and 61 deletions.
2 changes: 1 addition & 1 deletion .github/ISSUE_TEMPLATE/bug_report.md
@@ -20,7 +20,7 @@ Steps to reproduce the behavior:
**Operating environment(运行环境):**
- python version [e.g. 3.5, 3.6]
- torch version [e.g. 1.6.0, 1.7.0]
- deepctr-torch version [e.g. 0.2.4,]
- deepctr-torch version [e.g. 0.2.7,]

**Additional context**
Add any other context about the problem here.
2 changes: 1 addition & 1 deletion .github/ISSUE_TEMPLATE/question.md
@@ -17,4 +17,4 @@ Add any other context about the problem here.
**Operating environment(运行环境):**
- python version [e.g. 3.6]
- torch version [e.g. 1.7.0,]
- deepctr-torch version [e.g. 0.2.4,]
- deepctr-torch version [e.g. 0.2.7,]
20 changes: 11 additions & 9 deletions README.md
@@ -9,7 +9,7 @@

[![Documentation Status](https://readthedocs.org/projects/deepctr-torch/badge/?version=latest)](https://deepctr-torch.readthedocs.io/)
![CI status](https://github.com/shenweichen/deepctr-torch/workflows/CI/badge.svg)
[![codecov](https://codecov.io/gh/shenweichen/DeepCTR-Torch/branch/master/graph/badge.svg)](https://codecov.io/gh/shenweichen/DeepCTR-Torch)
[![codecov](https://codecov.io/gh/shenweichen/DeepCTR-Torch/branch/master/graph/badge.svg?token=m6v89eYOjp)](https://codecov.io/gh/shenweichen/DeepCTR-Torch)
[![Disscussion](https://img.shields.io/badge/chat-wechat-brightgreen?style=flat)](./README.md#disscussiongroup)
[![License](https://img.shields.io/github/license/shenweichen/deepctr-torch.svg)](https://github.com/shenweichen/deepctr-torch/blob/master/LICENSE)

@@ -41,6 +41,8 @@ Let's [**Get Started!**](https://deepctr-torch.readthedocs.io/en/latest/Quick-St
| IFM | [IJCAI 2019][An Input-aware Factorization Machine for Sparse Prediction](https://www.ijcai.org/Proceedings/2019/0203.pdf) |
| DCN V2 | [arxiv 2020][DCN V2: Improved Deep & Cross Network and Practical Lessons for Web-scale Learning to Rank Systems](https://arxiv.org/abs/2008.13535) |
| DIFM | [IJCAI 2020][A Dual Input-aware Factorization Machine for CTR Prediction](https://www.ijcai.org/Proceedings/2020/0434.pdf) |
| AFN | [AAAI 2020][Adaptive Factorization Network: Learning Adaptive-Order Feature Interactions](https://arxiv.org/pdf/1909.03276) |



## DisscussionGroup & Related Projects
@@ -49,7 +51,7 @@ Let's [**Get Started!**](https://deepctr-torch.readthedocs.io/en/latest/Quick-St
<table style="margin-left: 20px; margin-right: auto;">
<tr>
<td>
公众号:<b>浅梦的学习笔记</b><br><br>
公众号:<b>浅梦学习笔记</b><br><br>
<a href="https://github.com/shenweichen/deepctr-torch">
<img align="center" src="./docs/pics/code.png" />
</a>
@@ -74,7 +76,7 @@ Let's [**Get Started!**](https://deepctr-torch.readthedocs.io/en/latest/Quick-St



## Contributors([welcome to join us!](./CONTRIBUTING.md))
## Main Contributors([welcome to join us!](./CONTRIBUTING.md))

<table border="0">
<tbody>
@@ -125,18 +127,18 @@ Let's [**Get Started!**](https://deepctr-torch.readthedocs.io/en/latest/Quick-St
<p>Dev<br>
NetEase <br> <br> </p>​
</td>
<td>
​ <a href="https://github.com/WeiyuCheng"><img width="70" height="70" src="https://github.com/WeiyuCheng.png?s=40" alt="pic"></a><br>
​ <a href="https://github.com/WeiyuCheng">Cheng Weiyu</a> ​
<p>Dev<br>
Shanghai Jiao Tong University</p>​
</td>
<td>
​ <a href="https://github.com/tangaqi"><img width="70" height="70" src="https://github.com/tangaqi.png?s=40" alt="pic"></a><br>
​ <a href="https://github.com/tangaqi">Tang</a>
<p>Test<br>
Tongji University <br> <br> </p>​
</td>
<td>
​ <a href="https://github.com/uestc7d"><img width="70" height="70" src="https://github.com/uestc7d.png?s=40" alt="pic"></a><br>
​ <a href="https://github.com/uestc7d">Xu Qidi</a> ​
<p>Dev<br>
University of <br> Electronic Science and <br> Technology of China</p>​
</td>
</tr>
</tbody>
</table>
2 changes: 1 addition & 1 deletion deepctr_torch/__init__.py
@@ -2,5 +2,5 @@
from . import models
from .utils import check_version

__version__ = '0.2.6'
__version__ = '0.2.7'
check_version(__version__)
7 changes: 3 additions & 4 deletions deepctr_torch/layers/activation.py
@@ -17,6 +17,7 @@ class Dice(nn.Module):
- [Zhou G, Zhu X, Song C, et al. Deep interest network for click-through rate prediction[C]//Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining. ACM, 2018: 1059-1068.](https://arxiv.org/pdf/1706.06978.pdf)
- https://github.com/zhougr1993/DeepInterestNetwork, https://github.com/fanoping/DIN-pytorch
"""

def __init__(self, emb_size, dim=2, epsilon=1e-8, device='cpu'):
super(Dice, self).__init__()
assert dim == 2 or dim == 3
@@ -41,18 +42,16 @@ def forward(self, x):
x_p = self.sigmoid(self.bn(x))
out = self.alpha * (1 - x_p) * x + x_p * x
out = torch.transpose(out, 1, 2)

return out


class Identity(nn.Module):


def __init__(self, **kwargs):
super(Identity, self).__init__()

def forward(self, X):
return X
def forward(self, inputs):
return inputs


def activation_layer(act_name, hidden_size=None, dice_dim=2):
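For reference, the Dice activation touched in the hunk above (from the DIN paper cited in its docstring) can be sketched for the 2-D case — a simplified re-implementation, not the library's exact class:

```python
import torch
import torch.nn as nn


class DiceSketch(nn.Module):
    """Simplified Dice activation (2-D case): p = sigmoid(BatchNorm(x)),
    out = alpha * (1 - p) * x + p * x, with a learnable per-channel alpha."""

    def __init__(self, emb_size, epsilon=1e-8):
        super().__init__()
        self.bn = nn.BatchNorm1d(emb_size, eps=epsilon)
        self.sigmoid = nn.Sigmoid()
        self.alpha = nn.Parameter(torch.zeros((emb_size,)))

    def forward(self, x):  # x: (batch_size, emb_size)
        x_p = self.sigmoid(self.bn(x))
        return self.alpha * (1 - x_p) * x + x_p * x


act = DiceSketch(emb_size=4)
out = act(torch.randn(8, 4))
print(out.shape)  # torch.Size([8, 4])
```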
8 changes: 4 additions & 4 deletions deepctr_torch/layers/interaction.py
@@ -130,7 +130,7 @@ def __init__(self, filed_size, embedding_size, bilinear_type="interaction", seed
self.bilinear.append(
nn.Linear(embedding_size, embedding_size, bias=False))
elif self.bilinear_type == "interaction":
for i, j in itertools.combinations(range(filed_size), 2):
for _, _ in itertools.combinations(range(filed_size), 2):
self.bilinear.append(
nn.Linear(embedding_size, embedding_size, bias=False))
else:
@@ -487,9 +487,9 @@ def __init__(self, in_features, low_rank=32, num_experts=4, layer_num=2, device=
self.bias = nn.Parameter(torch.Tensor(self.layer_num, in_features, 1))

init_para_list = [self.U_list, self.V_list, self.C_list]
for i in range(len(init_para_list)):
for j in range(self.layer_num):
nn.init.xavier_normal_(init_para_list[i][j])
for para in init_para_list:
for i in range(self.layer_num):
nn.init.xavier_normal_(para[i])

for i in range(len(self.bias)):
nn.init.zeros_(self.bias[i])
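The second hunk above replaces index-based loops with direct iteration when Xavier-initializing the low-rank factor tensors of DCN-Mix; a minimal standalone sketch of the same pattern (tensor shapes are illustrative, not the library's exact values):

```python
import torch
import torch.nn as nn

layer_num, num_experts, in_features, low_rank = 2, 3, 8, 4

# Factor tensors shaped like the U/V/C parameters in the hunk above.
U_list = nn.Parameter(torch.Tensor(layer_num, num_experts, in_features, low_rank))
V_list = nn.Parameter(torch.Tensor(layer_num, num_experts, in_features, low_rank))
C_list = nn.Parameter(torch.Tensor(layer_num, num_experts, low_rank, low_rank))

# Iterate the parameters directly instead of indexing a list by position.
for para in (U_list, V_list, C_list):
    for i in range(layer_num):
        nn.init.xavier_normal_(para[i])

print(torch.isfinite(U_list).all().item())  # True
```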
40 changes: 20 additions & 20 deletions deepctr_torch/layers/sequence.py
@@ -117,7 +117,7 @@ def forward(self, query, keys, keys_length, mask=None):
Output shape
- 3D tensor with shape: ``(batch_size, 1, embedding_size)``.
"""
batch_size, max_length, dim = keys.size()
batch_size, max_length, _ = keys.size()

# Mask
if self.supports_masking:
@@ -176,16 +176,16 @@ def __init__(self, k, axis, device='cpu'):
self.axis = axis
self.to(device)

def forward(self, input):
if self.axis < 0 or self.axis >= len(input.shape):
def forward(self, inputs):
if self.axis < 0 or self.axis >= len(inputs.shape):
raise ValueError("axis must be 0~%d,now is %d" %
(len(input.shape) - 1, self.axis))
(len(inputs.shape) - 1, self.axis))

if self.k < 1 or self.k > input.shape[self.axis]:
if self.k < 1 or self.k > inputs.shape[self.axis]:
raise ValueError("k must be in 1 ~ %d,now k is %d" %
(input.shape[self.axis], self.k))
(inputs.shape[self.axis], self.k))

out = torch.topk(input, k=self.k, dim=self.axis, sorted=True)[0]
out = torch.topk(inputs, k=self.k, dim=self.axis, sorted=True)[0]
return out


@@ -220,11 +220,11 @@ def __init__(self, input_size, hidden_size, bias=True):
self.register_parameter('bias_ih', None)
self.register_parameter('bias_hh', None)

def forward(self, input, hx, att_score):
gi = F.linear(input, self.weight_ih, self.bias_ih)
def forward(self, inputs, hx, att_score):
gi = F.linear(inputs, self.weight_ih, self.bias_ih)
gh = F.linear(hx, self.weight_hh, self.bias_hh)
i_r, i_z, i_n = gi.chunk(3, 1)
h_r, h_z, h_n = gh.chunk(3, 1)
i_r, _, i_n = gi.chunk(3, 1)
h_r, _, h_n = gh.chunk(3, 1)

reset_gate = torch.sigmoid(i_r + h_r)
# update_gate = torch.sigmoid(i_z + h_z)
@@ -266,8 +266,8 @@ def __init__(self, input_size, hidden_size, bias=True):
self.register_parameter('bias_ih', None)
self.register_parameter('bias_hh', None)

def forward(self, input, hx, att_score):
gi = F.linear(input, self.weight_ih, self.bias_ih)
def forward(self, inputs, hx, att_score):
gi = F.linear(inputs, self.weight_ih, self.bias_ih)
gh = F.linear(hx, self.weight_hh, self.bias_hh)
i_r, i_z, i_n = gi.chunk(3, 1)
h_r, h_z, h_n = gh.chunk(3, 1)
@@ -293,25 +293,25 @@ def __init__(self, input_size, hidden_size, bias=True, gru_type='AGRU'):
elif gru_type == 'AUGRU':
self.rnn = AUGRUCell(input_size, hidden_size, bias)

def forward(self, input, att_scores=None, hx=None):
if not isinstance(input, PackedSequence) or not isinstance(att_scores, PackedSequence):
def forward(self, inputs, att_scores=None, hx=None):
if not isinstance(inputs, PackedSequence) or not isinstance(att_scores, PackedSequence):
raise NotImplementedError("DynamicGRU only supports packed input and att_scores")

input, batch_sizes, sorted_indices, unsorted_indices = input
inputs, batch_sizes, sorted_indices, unsorted_indices = inputs
att_scores, _, _, _ = att_scores

max_batch_size = int(batch_sizes[0])
if hx is None:
hx = torch.zeros(max_batch_size, self.hidden_size,
dtype=input.dtype, device=input.device)
dtype=inputs.dtype, device=inputs.device)

outputs = torch.zeros(input.size(0), self.hidden_size,
dtype=input.dtype, device=input.device)
outputs = torch.zeros(inputs.size(0), self.hidden_size,
dtype=inputs.dtype, device=inputs.device)

begin = 0
for batch in batch_sizes:
new_hx = self.rnn(
input[begin:begin + batch],
inputs[begin:begin + batch],
hx[0:batch],
att_scores[begin:begin + batch])
outputs[begin:begin + batch] = new_hx
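The `_` placeholders introduced in the AGRUCell hunk above reflect that AGRU discards the GRU update gate and substitutes the external attention score. One step can be sketched as follows (a standalone illustration; the weight layout mirrors the cell's `3 * hidden_size` convention, bias omitted):

```python
import torch
import torch.nn.functional as F


def agru_step(inputs, hx, att_score, weight_ih, weight_hh):
    """One AGRU step: the attention score replaces the update gate.
    inputs: (B, input_size); hx: (B, hidden); att_score: (B, 1)."""
    gi = F.linear(inputs, weight_ih)  # (B, 3 * hidden)
    gh = F.linear(hx, weight_hh)
    i_r, _, i_n = gi.chunk(3, 1)      # the update-gate chunk goes unused
    h_r, _, h_n = gh.chunk(3, 1)
    reset_gate = torch.sigmoid(i_r + h_r)
    new_state = torch.tanh(i_n + reset_gate * h_n)
    return (1. - att_score) * hx + att_score * new_state


B, input_size, hidden = 4, 6, 5
hy = agru_step(torch.randn(B, input_size), torch.zeros(B, hidden),
               torch.rand(B, 1),
               torch.randn(3 * hidden, input_size),
               torch.randn(3 * hidden, hidden))
print(hy.shape)  # torch.Size([4, 5])
```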
5 changes: 3 additions & 2 deletions deepctr_torch/models/afn.py
@@ -2,6 +2,7 @@
"""
Author:
Weiyu Cheng, [email protected]
Reference:
[1] Cheng, W., Shen, Y. and Huang, L. 2020. Adaptive Factorization Network: Learning Adaptive-Order Feature
Interactions. Proceedings of the AAAI Conference on Artificial Intelligence. 34, 04 (Apr. 2020), 3609-3616.
@@ -14,7 +15,7 @@


class AFN(BaseModel):
"""Instantiates the Adaptive Factorization Network architecture.
"""Instantiates the Adaptive Factorization Network architecture.
In DeepCTR-Torch, we only provide the non-ensembled version of AFN for the consistency of model interfaces. For the ensembled version of AFN+, please refer to https://github.com/WeiyuCheng/DeepCTR-Torch (Pytorch Version) or https://github.com/WeiyuCheng/AFN-AAAI-20 (Tensorflow Version).
@@ -38,7 +39,7 @@ class AFN(BaseModel):

def __init__(self,
linear_feature_columns, dnn_feature_columns,
ltl_hidden_size=600, afn_dnn_hidden_units=(400, 400, 400),
ltl_hidden_size=256, afn_dnn_hidden_units=(256, 128),
l2_reg_linear=0.00001, l2_reg_embedding=0.00001, l2_reg_dnn=0,
init_std=0.0001, seed=1024, dnn_dropout=0, dnn_activation='relu',
task='binary', device='cpu', gpus=None):
2 changes: 1 addition & 1 deletion deepctr_torch/models/dien.py
@@ -319,7 +319,7 @@ def __init__(self,
@staticmethod
def _get_last_state(states, keys_length):
# states [B, T, H]
batch_size, max_seq_length, hidden_size = states.size()
batch_size, max_seq_length, _ = states.size()

mask = (torch.arange(max_seq_length, device=keys_length.device).repeat(
batch_size, 1) == (keys_length.view(-1, 1) - 1))
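The `_get_last_state` helper shown in the hunk above picks, for each sequence in the batch, the hidden state at its true last step; a standalone sketch:

```python
import torch


def get_last_state(states, keys_length):
    """states: (B, T, H); keys_length: (B,) true sequence lengths.
    Returns states[b, keys_length[b] - 1] for each b, shape (B, H)."""
    batch_size, max_seq_length, _ = states.size()
    mask = (torch.arange(max_seq_length, device=keys_length.device).repeat(
        batch_size, 1) == (keys_length.view(-1, 1) - 1))
    return states[mask]  # boolean mask keeps exactly one row per sequence


states = torch.arange(24, dtype=torch.float32).reshape(2, 3, 4)
last = get_last_state(states, torch.tensor([2, 3]))
print(last)  # states[0, 1] and states[1, 2]
```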
2 changes: 1 addition & 1 deletion deepctr_torch/models/ifm.py
@@ -66,7 +66,7 @@ def __init__(self,
self.to(device)

def forward(self, X):
sparse_embedding_list, dense_value_list = self.input_from_feature_columns(X, self.dnn_feature_columns,
sparse_embedding_list, _ = self.input_from_feature_columns(X, self.dnn_feature_columns,
self.embedding_dict)
if not len(sparse_embedding_list) > 0:
raise ValueError("there are no sparse features")
Binary file added docs/pics/AFN.jpg
8 changes: 8 additions & 0 deletions docs/source/Features.md
@@ -261,6 +261,14 @@ Dual Inputaware Factorization Machines (DIFM) can adaptively reweight the origin

[Lu W, Yu Y, Chang Y, et al. A Dual Input-aware Factorization Machine for CTR Prediction[C]//IJCAI. 2020: 3139-3145.](https://www.ijcai.org/Proceedings/2020/0434.pdf)

### AFN(Adaptive Factorization Network: Learning Adaptive-Order Feature Interactions)

Adaptive Factorization Network (AFN) can learn arbitrary-order cross features adaptively from data. The core of AFN is a logarithmic transformation layer that converts the power of each feature in a feature combination into a coefficient to be learned.
[**AFN Model API**](./deepctr_torch.models.afn.html)

![AFN](../pics/AFN.jpg)

[Cheng, W., Shen, Y. and Huang, L. 2020. Adaptive Factorization Network: Learning Adaptive-Order Feature Interactions. Proceedings of the AAAI Conference on Artificial Intelligence. 34, 04 (Apr. 2020), 3609-3616.](https://arxiv.org/pdf/1909.03276)


## Layers
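The logarithmic transformation layer described in the added Features.md text can be sketched as follows — an illustrative reconstruction of the paper's idea (each output is `exp(Σ_i w_ij · ln e_i)`, i.e. a product of features raised to learned powers), not the library's exact layer:

```python
import torch
import torch.nn as nn


class LogTransformSketch(nn.Module):
    """Illustrative AFN core: project embeddings into log-space, take a
    learnable linear combination, and exponentiate back. The learned
    weights w_ij act as (possibly fractional or negative) feature powers."""

    def __init__(self, field_size, ltl_hidden_size):
        super().__init__()
        self.weight = nn.Parameter(0.1 * torch.randn(field_size, ltl_hidden_size))

    def forward(self, embeds):  # embeds: (B, field_size, emb_dim)
        log_emb = torch.log(torch.clamp(torch.abs(embeds), min=1e-7))
        # (B, field, emb) x (field, hidden) -> (B, hidden, emb)
        out = torch.einsum('bfe,fh->bhe', log_emb, self.weight)
        return torch.exp(out)


ltl = LogTransformSketch(field_size=5, ltl_hidden_size=8)
y = ltl(torch.rand(2, 5, 4) + 0.1)
print(y.shape)  # torch.Size([2, 8, 4])
```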
3 changes: 2 additions & 1 deletion docs/source/History.md
@@ -1,5 +1,6 @@
# History
- 04/04/2021 : [v0.2.6](https://github.com/shenweichen/DeepCTR-Torch/releases/tag/v0.2.6) released.Add add [IFM](./Features.html#ifm-input-aware-factorization-machine) and [DIFM](./Features.html#difm-dual-input-aware-factorization-machine);Support multi-gpus running([example](./FAQ.html#how-to-run-the-demo-with-multiple-gpus)).
- 06/14/2021 : [v0.2.7](https://github.com/shenweichen/DeepCTR-Torch/releases/tag/v0.2.7) released.Add [AFN](./Features.html#afn-adaptive-factorization-network-learning-adaptive-order-feature-interactions) and fix some bugs.
- 04/04/2021 : [v0.2.6](https://github.com/shenweichen/DeepCTR-Torch/releases/tag/v0.2.6) released.Add [IFM](./Features.html#ifm-input-aware-factorization-machine) and [DIFM](./Features.html#difm-dual-input-aware-factorization-machine);Support multi-gpus running([example](./FAQ.html#how-to-run-the-demo-with-multiple-gpus)).
- 02/12/2021 : [v0.2.5](https://github.com/shenweichen/DeepCTR-Torch/releases/tag/v0.2.5) released.Fix bug in DCN-M.
- 12/05/2020 : [v0.2.4](https://github.com/shenweichen/DeepCTR-Torch/releases/tag/v0.2.4) released.Imporve compatibility & fix issues.Add History callback.([example](https://deepctr-torch.readthedocs.io/en/latest/FAQ.html#set-learning-rate-and-use-earlystopping)).
- 10/18/2020 : [v0.2.3](https://github.com/shenweichen/DeepCTR-Torch/releases/tag/v0.2.3) released.Add [DCN-M](./Features.html#dcn-deep-cross-network)&[DCN-Mix](./Features.html#dcn-mix-improved-deep-cross-network-with-mix-of-experts-and-matrix-kernel).Add EarlyStopping and ModelCheckpoint callbacks([example](https://deepctr-torch.readthedocs.io/en/latest/FAQ.html#set-learning-rate-and-use-earlystopping)).
Expand Down
2 changes: 1 addition & 1 deletion docs/source/conf.py
@@ -26,7 +26,7 @@
# The short X.Y version
version = ''
# The full version, including alpha/beta/rc tags
release = '0.2.6'
release = '0.2.7'


# -- General configuration ---------------------------------------------------
7 changes: 7 additions & 0 deletions docs/source/deepctr_torch.models.afn.rst
@@ -0,0 +1,7 @@
deepctr\_torch.models.afn module
================================

.. automodule:: deepctr_torch.models.afn
:members:
:no-undoc-members:
:no-show-inheritance:
1 change: 1 addition & 0 deletions docs/source/deepctr_torch.models.rst
@@ -23,6 +23,7 @@ Submodules
deepctr_torch.models.dien
deepctr_torch.models.ifm
deepctr_torch.models.difm
deepctr_torch.models.afn

Module contents
---------------
6 changes: 3 additions & 3 deletions docs/source/index.rst
@@ -34,16 +34,16 @@ You can read the latest code at https://github.com/shenweichen/DeepCTR-Torch and

News
-----
06/14/2021 : Add `AFN <./Features.html#afn-adaptive-factorization-network-learning-adaptive-order-feature-interactions>`_ and fix some bugs. `Changelog <https://github.com/shenweichen/DeepCTR-Torch/releases/tag/v0.2.7>`_

04/04/2021 : Add `IFM <./Features.html#ifm-input-aware-factorization-machine>`_ and `DIFM <./Features.html#difm-dual-input-aware-factorization-machine>`_ . Support multi-gpus running(`example <./FAQ.html#how-to-run-the-demo-with-multiple-gpus>`_). `Changelog <https://github.com/shenweichen/DeepCTR-Torch/releases/tag/v0.2.6>`_

02/12/2021 : Fix bug in DCN-M. `Changelog <https://github.com/shenweichen/DeepCTR-Torch/releases/tag/v0.2.4>`_

12/05/2020 : Imporve compatibility & fix issues.Add History callback(`example <https://deepctr-torch.readthedocs.io/en/latest/FAQ.html#set-learning-rate-and-use-earlystopping>`_). `Changelog <https://github.com/shenweichen/DeepCTR-Torch/releases/tag/v0.2.3>`_

DisscussionGroup
-----------------------

公众号:**浅梦的学习笔记** wechat ID: **deepctrbot**
公众号:**浅梦学习笔记** wechat ID: **deepctrbot**

.. image:: ../pics/code.png

2 changes: 1 addition & 1 deletion setup.py
@@ -9,7 +9,7 @@

setuptools.setup(
name="deepctr-torch",
version="0.2.6",
version="0.2.7",
author="Weichen Shen",
author_email="[email protected]",
description="Easy-to-use,Modular and Extendible package of deep learning based CTR(Click Through Rate) prediction models with PyTorch",
9 changes: 4 additions & 5 deletions tests/models/AFN_test.py
@@ -1,18 +1,16 @@
# -*- coding: utf-8 -*-
import pytest

from deepctr_torch.callbacks import EarlyStopping, ModelCheckpoint
from deepctr_torch.models import AFN
from tests.utils import get_test_data, SAMPLE_SIZE, check_model, get_device


@pytest.mark.parametrize(
'afn_dnn_hidden_units, sparse_feature_num, dense_feature_num',
[((256, 128), 3, 0),
((256, 128), 3, 3),
((256, 128), 0,3)]
[((256, 128), 3, 0),
((256, 128), 3, 3),
((256, 128), 0, 3)]
)

def test_AFN(afn_dnn_hidden_units, sparse_feature_num, dense_feature_num):
model_name = 'AFN'
sample_size = SAMPLE_SIZE
@@ -23,5 +21,6 @@ def test_AFN(afn_dnn_hidden_units, sparse_feature_num, dense_feature_num):

check_model(model, model_name, x, y)


if __name__ == '__main__':
pass