From 91d19ac2bb890c86e265d71f397f552afd980da9 Mon Sep 17 00:00:00 2001
From: LuoJianing
Date: Tue, 9 May 2023 16:22:12 +0800
Subject: [PATCH] update API docs rules

---
 guidelines/docs_comments_guidelines_en.md    | 48 +++++++-------
 guidelines/docs_comments_guidelines_zh_cn.md | 62 +++++++++---------
 .../images}/list_table.png                   | Bin
 3 files changed, 55 insertions(+), 55 deletions(-)
 rename {security/resource => guidelines/images}/list_table.png (100%)

diff --git a/guidelines/docs_comments_guidelines_en.md b/guidelines/docs_comments_guidelines_en.md
index 6934ba5..52d7896 100644
--- a/guidelines/docs_comments_guidelines_en.md
+++ b/guidelines/docs_comments_guidelines_en.md
@@ -42,7 +42,7 @@ Note:
     Note description.
 
 Args:
-    Arg1 (Type): Description. Default: xxx.
+    Arg1 (Type): Description. Default: ``xxx``.
     Arg2 (Type): Description.
 
         - Sub-argument1 or Value1 of Arg2: Description.
@@ -103,9 +103,9 @@ Supported Platforms:
 
 Args:
     network (Cell): A training or testing network.
-    loss_fn (Cell): Objective function, if `loss_fn` is None, the
+    loss_fn (Cell): Objective function, if `loss_fn` is ``None``, the
         network should contain the logic of loss and grads calculation, and the logic
-        of parallel if needed. Default: None.
+        of parallel if needed. Default: ``None``.
 
 Returns:
     function, original function.
@@ -116,16 +116,16 @@ Supported Platforms:
 ```text
 Args:
     amp_level (str): Option for argument `level` in `mindspore.amp.build_train_network`, level for mixed
-        precision training. Supports ["O0", "O2", "O3", "auto"]. Default: "O0".
+        precision training. Supports ``"O0"``, ``"O2"``, ``"O3"``, ``"auto"``. Default: ``"O0"``.
 
-        - O0: Do not change.
-        - O2: Cast network to float16, keep batchnorm run in float32, using dynamic loss scale.
-        - O3: Cast network to float16, with additional property 'keep_batchnorm_fp32=False'.
-        - auto: Set to level to recommended level in different devices. Set level to "O2" on GPU, set
-          level to "O3" Ascend. The recommended level is choose by the export experience, cannot
+        - ``O0``: Do not change.
+        - ``O2``: Cast network to float16, keep batchnorm run in float32, using dynamic loss scale.
+        - ``O3``: Cast network to float16, with additional property 'keep_batchnorm_fp32=False'.
+        - ``auto``: Set to level to recommended level in different devices. Set level to ``"O2"`` on GPU, set
+          level to ``"O3"`` Ascend. The recommended level is choose by the export experience, cannot
           always generalize. User should specify the level for special network.
 
-        "O2" is recommended on GPU, "O3" is recommended on Ascend.
+        ``"O2"`` is recommended on GPU, ``"O3"`` is recommended on Ascend.
 ```
 
 - Space requirements:
@@ -136,8 +136,8 @@ Supported Platforms:
 Args:
     lr_power (float): Learning rate power controls how the learning rate decreases during training, must be
         less than or equal to zero. Use fixed learning rate if `lr_power` is zero.
-    use_locking (bool): If True, the var and accumulation tensors will be protected from being updated.
-        Default: False.
+    use_locking (bool): If ``True``, the var and accumulation tensors will be protected from being updated.
+        Default: ``False``.
 
 Raises:
     TypeError: If `lr`, `l1`, `l2`, `lr_power` or `use_locking` is not a float.
@@ -152,14 +152,14 @@ Supported Platforms:
 ```text
 Args:
-    parallel_mode (str): There are five kinds of parallel modes, "stand_alone", "data_parallel",
-      "hybrid_parallel", "semi_auto_parallel" and "auto_parallel". Default: "stand_alone".
+    parallel_mode (str): There are five kinds of parallel modes, ``"stand_alone"``, ``"data_parallel"``,
+      ``"hybrid_parallel"``, ``"semi_auto_parallel"`` and ``"auto_parallel"``. Default: ``"stand_alone"``.
 
-        - stand_alone: Only one processor is working.
-        - data_parallel: Distributes the data across different processors.
-        - hybrid_parallel: Achieves data parallelism and model parallelism
           manually.
-        - semi_auto_parallel: Achieves data parallelism and model parallelism by
+        - ``stand_alone``: Only one processor is working.
+        - ``data_parallel``: Distributes the data across different processors.
+        - ``hybrid_parallel``: Achieves data parallelism and model parallelism
           manually.
+        - ``semi_auto_parallel``: Achieves data parallelism and model parallelism by
           setting parallel strategies.
 ```
@@ -408,7 +408,7 @@ Supported Platforms:
     Display effect:
 
-    ![image](./resource/list_table.png)
+    ![image](./images/list_table.png)
 
 - By default, the detailed description is displayed in one line. If you need to display it in another line, write it in the form of a list or code-block.
     - List mode:
@@ -451,8 +451,8 @@ Supported Platforms:
         :func:`mindspore.compression.quant.create_quant_config`
     ```
 
-- In the interface description, the variable name or interface name should be wrapped with the symbol \',
-  and the variable value should be wrapped with the symbol \' or \".
+- In the interface description, the variable name or interface name should be wrapped with the symbol \`,
+  and the variable value should be wrapped with the symbol \`\`.
 
     - Variable name or interface name.
@@ -467,7 +467,7 @@ Supported Platforms:
     - variable value.
 
     ```text
-    If `reduction` is not one of 'none', 'mean', 'sum'.
+    If `reduction` is not one of ``'none'``, ``'mean'``, ``'sum'``.
     ```
 
 - The deprecated operator needs to specify the recommended api, and "Deprecated" needs to be added in the Supported Platforms.
@@ -517,8 +517,8 @@ class Tensor(Tensor_):
     Args:
         input_data (Tensor, float, int, bool, tuple, list, numpy.ndarray): Input data of the tensor.
         dtype (:class:`mindspore.dtype`): Input data should be None, bool or numeric type defined in `mindspore.dtype`.
-            The argument is used to define the data type of the output tensor. If it is None, the data type of the
-            output tensor will be as same as the `input_data`. Default: None.
+            The argument is used to define the data type of the output tensor. If it is ``None``, the data type of the
+            output tensor will be as same as the `input_data`. Default: ``None``.
 
     Outputs:
         Tensor, with the same shape as `input_data`.
diff --git a/guidelines/docs_comments_guidelines_zh_cn.md b/guidelines/docs_comments_guidelines_zh_cn.md
index 70d6e97..f72d2d7 100644
--- a/guidelines/docs_comments_guidelines_zh_cn.md
+++ b/guidelines/docs_comments_guidelines_zh_cn.md
@@ -42,7 +42,7 @@ Note:
     Note description.
 
 Args:
-    Arg1 (Type): Description. Default: xxx.
+    Arg1 (Type): Description. Default: ``xxx`` .
    Arg2 (Type): Description.
 
         - Sub-argument1 or Value1 of Arg2: Description.
@@ -103,9 +103,9 @@ Supported Platforms:
 
 Args:
     network (Cell): A training or testing network.
-    loss_fn (Cell): Objective function, if `loss_fn` is None, the
+    loss_fn (Cell): Objective function, if `loss_fn` is ``None``, the
         network should contain the logic of loss and grads calculation, and the logic
-        of parallel if needed. Default: None.
+        of parallel if needed. Default: ``None``.
 
 Returns:
     function, original function.
@@ -115,17 +115,17 @@ Supported Platforms:
 ```text
 Args:
-    amp_level (str): Option for argument `level` in `mindspore.amp.build_train_network`, level for mixed
-        precision training. Supports ["O0", "O2", "O3", "auto"]. Default: "O0".
-
-        - O0: Do not change.
-        - O2: Cast network to float16, keep batchnorm run in float32, using dynamic loss scale.
-        - O3: Cast network to float16, with additional property 'keep_batchnorm_fp32=False'.
-        - auto: Set to level to recommended level in different devices. Set level to "O2" on GPU, set
-          level to "O3" Ascend. The recommended level is choose by the export experience, cannot
+    amp_level (str): Option for argument `level` in :func:`mindspore.amp.build_train_network`, level for mixed
+        precision training. Supports ``"O0"``, ``"O2"``, ``"O3"``, ``"auto"``. Default: ``"O0"``.
+
+        - ``O0``: Do not change.
+        - ``O2``: Cast network to float16, keep batchnorm run in float32, using dynamic loss scale.
+        - ``O3``: Cast network to float16, with additional property 'keep_batchnorm_fp32=False'.
+        - ``auto``: Set to level to recommended level in different devices. Set level to ``"O2"`` on GPU, set
+          level to ``"O3"`` Ascend. The recommended level is choose by the export experience, cannot
           always generalize. User should specify the level for special network.
 
-        "O2" is recommended on GPU, "O3" is recommended on Ascend.
+        ``"O2"`` is recommended on GPU, ``"O3"`` is recommended on Ascend.
 ```
 
 - 空格要求:
@@ -136,8 +136,8 @@ Supported Platforms:
 Args:
     lr_power (float): Learning rate power controls how the learning rate decreases during training, must be
         less than or equal to zero. Use fixed learning rate if `lr_power` is zero.
-    use_locking (bool): If True, the var and accumulation tensors will be protected from being updated.
-        Default: False.
+    use_locking (bool): If ``True``, the var and accumulation tensors will be protected from being updated.
+        Default: ``False``.
 
 Raises:
     TypeError: If `lr`, `l1`, `l2`, `lr_power` or `use_locking` is not a float.
@@ -152,14 +152,14 @@ Supported Platforms:
 ```text
 Args:
-    parallel_mode (str): There are five kinds of parallel modes, "stand_alone", "data_parallel",
-      "hybrid_parallel", "semi_auto_parallel" and "auto_parallel". Default: "stand_alone".
+    parallel_mode (str): There are five kinds of parallel modes, ``"stand_alone"``, ``"data_parallel"``,
+      ``"hybrid_parallel"``, ``"semi_auto_parallel"`` and ``"auto_parallel"``. Default: ``"stand_alone"``.
 
-        - stand_alone: Only one processor is working.
-        - data_parallel: Distributes the data across different processors.
-        - hybrid_parallel: Achieves data parallelism and model parallelism
           manually.
-        - semi_auto_parallel: Achieves data parallelism and model parallelism by
+        - ``stand_alone``: Only one processor is working.
+        - ``data_parallel``: Distributes the data across different processors.
+        - ``hybrid_parallel``: Achieves data parallelism and model parallelism
           manually.
+        - ``semi_auto_parallel``: Achieves data parallelism and model parallelism by
           setting parallel strategies.
 ```
@@ -408,7 +408,7 @@ Supported Platforms:
     显示效果:
 
-    ![image](./resource/list_table.png)
+    ![image](./images/list_table.png)
 
 - 详细说明默认不换行,如需换行,需以列表或code-block的方式写作。
     - 列表方式:
@@ -451,7 +451,7 @@ Supported Platforms:
         :func:`mindspore.compression.quant.create_quant_config`
     ```
 
-- 接口描述中,变量名或接口名使用符号\`包裹,变量值使用符号\'或\"包裹。
+- 接口描述中,变量名或接口名使用符号\`包裹,变量值使用符号\`\`包裹。
 
     - 变量名或接口名。
@@ -466,7 +466,7 @@ Supported Platforms:
     - 变量值。
 
     ```text
-    If `reduction` is not one of 'none', 'mean', 'sum'.
+    If `reduction` is not one of ``'none'``, ``'mean'``, ``'sum'``.
     ```
 
 - 废弃算子需要写明建议使用的接口,以及需要在支持平台中写上Deprecated。
@@ -516,8 +516,8 @@ class Tensor(Tensor_):
     Args:
         input_data (Tensor, float, int, bool, tuple, list, numpy.ndarray): Input data of the tensor.
         dtype (:class:`mindspore.dtype`): Input data should be None, bool or numeric type defined in `mindspore.dtype`.
-            The argument is used to define the data type of the output tensor. If it is None, the data type of the
-            output tensor will be as same as the `input_data`. Default: None.
+            The argument is used to define the data type of the output tensor. If it is ``None``, the data type of the
+            output tensor will be as same as the `input_data`. Default: ``None``.
 
     Outputs:
         Tensor, with the same shape as `input_data`.
@@ -551,19 +551,19 @@ def ms_function(fn=None, obj=None, input_signature=None):
     This allows the MindSpore runtime to apply optimizations based on graph.
 
     Args:
-        fn (Function): The Python function that will be run as a graph. Default: None.
+        fn (Function): The Python function that will be run as a graph. Default: ``None``.
         obj (Object): The Python Object that provides the information for identifying the compiled function. Default:
-            None.
+            ``None``.
         input_signature (MetaTensor): The MetaTensor which describes the input arguments. The MetaTensor specifies
             the shape and dtype of the Tensor and they will be supplied to this function. If `input_signature`
             is specified, each input to `fn` must be a `Tensor`. And the input parameters of `fn` cannot accept
             `**kwargs`. The shape and dtype of actual inputs should keep the same as `input_signature`. Otherwise,
-            TypeError will be raised. Default: None.
+            TypeError will be raised. Default: ``None``.
 
     Returns:
-        Function, if `fn` is not None, returns a callable function that will execute the compiled function; If `fn` is
-        None, returns a decorator and when this decorator invokes with a single `fn` argument, the callable function is
-        equal to the case when `fn` is not None.
+        Function, if `fn` is not ``None``, returns a callable function that will execute the compiled function; If `fn` is
+        ``None``, returns a decorator and when this decorator invokes with a single `fn` argument, the callable function is
+        equal to the case when `fn` is not ``None``.
 
     Examples:
         >>> def tensor_add(x, y):
diff --git a/security/resource/list_table.png b/guidelines/images/list_table.png
similarity index 100%
rename from security/resource/list_table.png
rename to guidelines/images/list_table.png
--
Gitee
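
As a quick illustration of the conventions this patch standardizes on, the sketch below shows a docstring written in the updated style: double backticks around default values and literal option values, single backticks around argument names, and option values expanded in an indented sub-list. The function itself is a made-up helper, not a MindSpore API, and is only here to show the formatting.

```python
def clip_value(x, min_value=0.0, max_value=None, mode="clip"):
    """
    Clip the input value to a given range.

    Args:
        x (float): The input value.
        min_value (float): Lower bound of the range. Default: ``0.0``.
        max_value (float): Upper bound of the range. If ``None``, no upper
            bound is applied. Default: ``None``.
        mode (str): How out-of-range values are handled. Supports ``"clip"``,
            ``"raise"``. Default: ``"clip"``.

            - ``"clip"``: Out-of-range values are clamped to the nearest bound.
            - ``"raise"``: Out-of-range values raise a ValueError.

    Returns:
        float, the clipped value.

    Raises:
        ValueError: If `mode` is not one of ``"clip"``, ``"raise"``.

    Examples:
        >>> print(clip_value(1.5, max_value=1.0))
        1.0
    """
    # Validate the literal option value, mirroring the Raises section above.
    if mode not in ("clip", "raise"):
        raise ValueError('`mode` must be one of "clip", "raise".')
    # Clamp to the optional upper bound first, then to the lower bound.
    clipped = max(min_value, x if max_value is None else min(x, max_value))
    if mode == "raise" and clipped != x:
        raise ValueError("input value is out of range.")
    return clipped
```

In the rendered Sphinx output the double-backtick form appears as an inline literal, which is what visually separates default and option values from the single-backtick references to argument and interface names.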