Python Tutorial

Two essential functions for learning Python: dir() and help()

This article introduces the two essential functions for learning Python, dir() and help(). They are a handy reference when you need to explore an unfamiliar package or debug a problem; if that sounds useful, follow along.

Reference:
https://www.bilibili.com/video/BV1hE411t7RN?p=4

Take PyTorch as an example:
(Figure: PyTorch drawn as a toolbox with compartments 1-4; compartment 3 contains tools a, b, and c)
Think of PyTorch as a toolbox. Inside it are many small compartments, like 1, 2, 3, and 4 in the figure. Opening a compartment may reveal yet more compartments, or it may reveal tools you can use directly, like a, b, and c inside compartment 3.

So how do you see what is inside a toolbox or one of its compartments?
Use the dir() function!

import torch
print(dir(torch))   # open the PyTorch toolbox: many compartments, one of them is called cuda
print(dir(torch.cuda))  # open cuda: more compartments, one of them is called is_available
print(dir(torch.cuda.is_available))   # open is_available

Opening is_available, we see a long list of names starting with double underscores. These are the special built-in (dunder) attributes that every Python function object carries, which suggests that is_available is already a function, that is, a tool you can use directly rather than another compartment.

Out:

['AVG', 'AggregationType', 'AnyType', 'Argument', 'ArgumentSpec', 'BFloat16Storage', 'BFloat16Tensor', 'BenchmarkConfig', 'BenchmarkExecutionStats', 'Block', 'BoolStorage', 'BoolTensor', 'BoolType', 'BufferDict', 'ByteStorage', 'ByteTensor', 'CONV_BN_FUSION', 'CallStack', 'Capsule', 'CharStorage', 'CharTensor', 'ClassType', 'Code', 'CompilationUnit', 'CompleteArgumentSpec', 'ComplexDoubleStorage', 'ComplexFloatStorage', 'ConcreteModuleType', 'ConcreteModuleTypeBuilder', 'CudaBFloat16StorageBase', 'CudaBoolStorageBase', 'CudaByteStorageBase', 'CudaCharStorageBase', 'CudaComplexDoubleStorageBase', 'CudaComplexFloatStorageBase', 'CudaDoubleStorageBase', 'CudaFloatStorageBase', 'CudaHalfStorageBase', 'CudaIntStorageBase', 'CudaLongStorageBase', 'CudaShortStorageBase', 'DeepCopyMemoTable', 'DeviceObjType', 'DictType', 'DoubleStorage', 'DoubleTensor', 'ErrorReport', 'ExecutionPlan', 'ExtraFilesMap', 'FatalError', 'FileCheck', 'FloatStorage', 'FloatTensor', 'FloatType', 'FunctionSchema', 'Future', 'FutureType', 'Generator', 'Gradient', 'Graph', 'GraphExecutorState', 'HalfStorage', 'HalfStorageBase', 'HalfTensor', 'INSERT_FOLD_PREPACK_OPS', 'IODescriptor', 'IntStorage', 'IntTensor', 'IntType', 'InterfaceType', 'JITException', 'ListType', 'LiteScriptModule', 'LockingLogger', 'LoggerBase', 'LongStorage', 'LongTensor', 'MobileOptimizerType', 'ModuleDict', 'Node', 'NoneType', 'NoopLogger', 'NumberType', 'OptionalType', 'ParameterDict', 'PyObjectType', 'PyTorchFileReader', 'PyTorchFileWriter', 'QInt32Storage', 'QInt32StorageBase', 'QInt8Storage', 'QInt8StorageBase', 'QUInt8Storage', 'REMOVE_DROPOUT', 'RRefType', 'SUM', 'ScriptClass', 'ScriptFunction', 'ScriptMethod', 'ScriptModule', 'ScriptObject', 'Set', 'ShortStorage', 'ShortTensor', 'Size', 'Storage', 'StringType', 'Tensor', 'TensorType', 'ThroughputBenchmark', 'TracingState', 'TupleType', 'Type', 'USE_GLOBAL_DEPS', 'USE_RTLD_GLOBAL_WITH_LIBTORCH', 'Use', 'Value', '_C', '_StorageBase', '_VF', '__all__', '__annotations__', '__builtins__', '__cached__', '__config__', '__doc__', '__file__', '__future__', '__loader__', '__name__', '__package__', '__path__', '__spec__', '__version__', '_adaptive_avg_pool2d', '_addmv_impl_', '_addr', '_addr_', '_amp_non_finite_check_and_unscale_', '_amp_update_scale', '_baddbmm_mkl_', '_batch_norm_impl_index', '_bmm', '_cast_Byte', '_cast_Char', '_cast_Double', '_cast_Float', '_cast_Half', '_cast_Int', '_cast_Long', '_cast_Short', '_cat', '_choose_qparams_per_tensor', '_classes', '_convolution', '_convolution_nogroup', '_copy_from', '_ctc_loss', '_cudnn_ctc_loss', '_cudnn_init_dropout_state', '_cudnn_rnn', '_cudnn_rnn_flatten_weight', '_cufft_clear_plan_cache', '_cufft_get_plan_cache_max_size', '_cufft_get_plan_cache_size', '_cufft_set_plan_cache_max_size', '_cummax_helper', '_cummin_helper', '_debug_has_internal_overlap', '_dim_arange', '_dirichlet_grad', '_embedding_bag', '_empty_affine_quantized', '_empty_per_channel_affine_quantized', '_euclidean_dist', '_fft_with_size', '_fused_dropout', '_has_compatible_shallow_copy_type', '_import_dotted_name', '_index_copy_', '_index_put_impl_', '_is_deterministic', '_jit_internal', '_linalg_utils', '_load_global_deps', '_lobpcg', '_log_softmax', '_log_softmax_backward_data', '_logcumsumexp', '_lowrank', '_lu_solve_helper', '_lu_with_info', '_make_per_channel_quantized_tensor', '_make_per_tensor_quantized_tensor', '_masked_scale', '_mkldnn', '_mkldnn_reshape', '_mkldnn_transpose', '_mkldnn_transpose_', '_mode', '_multinomial_alias_draw', '_multinomial_alias_setup', 
'_namedtensor_internals', '_nnpack_available', '_nnpack_spatial_convolution', '_ops', '_overrides', '_pack_padded_sequence', '_pad_packed_sequence', '_reshape_from_tensor', '_s_where', '_sample_dirichlet', '_set_deterministic', '_shape_as_tensor', '_six', '_sobol_engine_draw', '_sobol_engine_ff_', '_sobol_engine_initialize_state_', '_sobol_engine_scramble_', '_softmax', '_softmax_backward_data', '_sparse_addmm', '_sparse_log_softmax', '_sparse_log_softmax_backward_data', '_sparse_mm', '_sparse_softmax', '_sparse_softmax_backward_data', '_sparse_sum', '_standard_gamma', '_standard_gamma_grad', '_storage_classes', '_string_classes', '_tensor_classes', '_tensor_str', '_test_serialization_subcmul', '_trilinear', '_unique', '_unique2', '_use_cudnn_ctc_loss', '_use_cudnn_rnn_flatten_weight', '_utils', '_utils_internal', '_weight_norm', '_weight_norm_cuda_interface', 'abs', 'abs_', 'absolute', 'absolute_', 'acos', 'acos_', 'acosh', 'acosh_', 'adaptive_avg_pool1d', 'adaptive_max_pool1d', 'add', 'addbmm', 'addcdiv', 'addcmul', 'addmm', 'addmv', 'addmv_', 'addr', 'affine_grid_generator', 'align_tensors', 'all', 'allclose', 'alpha_dropout', 'alpha_dropout_', 'angle', 'any', 'arange', 'argmax', 'argmin', 'argsort', 'as_strided', 'as_strided_', 'as_tensor', 'asin', 'asin_', 'asinh', 'asinh_', 'atan', 'atan2', 'atan_', 'atanh', 'atanh_', 'autocast_decrement_nesting', 'autocast_increment_nesting', 'autograd', 'avg_pool1d', 'backends', 'baddbmm', 'bartlett_window', 'base_py_dll_path', 'batch_norm', 'batch_norm_backward_elemt', 'batch_norm_backward_reduce', 'batch_norm_elemt', 'batch_norm_gather_stats', 'batch_norm_gather_stats_with_counts', 'batch_norm_stats', 'batch_norm_update_stats', 'bernoulli', 'bfloat16', 'bilinear', 'binary_cross_entropy_with_logits', 'bincount', 'binomial', 'bitwise_and', 'bitwise_not', 'bitwise_or', 'bitwise_xor', 'blackman_window', 'block_diag', 'bmm', 'bool', 'broadcast_tensors', 'bucketize', 'can_cast', 'cartesian_prod', 'cat', 'cdist', 'cdouble', 'ceil', 'ceil_', 'celu', 'celu_', 'cfloat', 'chain_matmul', 'channel_shuffle', 'channels_last', 'channels_last_3d', 'cholesky', 'cholesky_inverse', 'cholesky_solve', 'chunk', 'clamp', 'clamp_', 'clamp_max', 'clamp_max_', 'clamp_min', 'clamp_min_', 'classes', 'clear_autocast_cache', 'clone', 'combinations', 'compiled_with_cxx11_abi', 'complex128', 'complex32', 'complex64', 'conj', 'constant_pad_nd', 'contiguous_format', 'conv1d', 'conv2d', 'conv3d', 'conv_tbc', 'conv_transpose1d', 'conv_transpose2d', 'conv_transpose3d', 'convolution', 'cos', 'cos_', 'cosh', 'cosh_', 'cosine_embedding_loss', 'cosine_similarity', 'cpp', 'cross', 'ctc_loss', 'ctypes', 'cuda', 'cuda_path', 'cuda_version', 'cudnn_affine_grid_generator', 'cudnn_batch_norm', 'cudnn_convolution', 'cudnn_convolution_transpose', 'cudnn_grid_sampler', 'cudnn_is_acceptable', 'cummax', 'cummin', 'cumprod', 'cumsum', 'default_generator', 'deg2rad', 'deg2rad_', 'dequantize', 'det', 'detach', 'detach_', 'device', 'diag', 'diag_embed', 'diagflat', 'diagonal', 'digamma', 'dist', 'distributed', 'distributions', 'div', 'dll', 'dll_path', 'dll_paths', 'dlls', 'dot', 'double', 'dropout', 'dropout_', 'dsmm', 'dtype', 'eig', 'einsum', 'embedding', 'embedding_bag', 'embedding_renorm_', 'empty', 'empty_like', 'empty_meta', 'empty_quantized', 'empty_strided', 'enable_grad', 'eq', 'equal', 'erf', 'erf_', 'erfc', 'erfc_', 'erfinv', 'exp', 'exp_', 'expm1', 'expm1_', 'eye', 'fake_quantize_per_channel_affine', 'fake_quantize_per_tensor_affine', 'fbgemm_linear_fp16_weight', 
'fbgemm_linear_fp16_weight_fp32_activation', 'fbgemm_linear_int8_weight', 'fbgemm_linear_int8_weight_fp32_activation', 'fbgemm_linear_quantize_weight', 'fbgemm_pack_gemm_matrix_fp16', 'fbgemm_pack_quantized_matrix', 'feature_alpha_dropout', 'feature_alpha_dropout_', 'feature_dropout', 'feature_dropout_', 'fft', 'fill_', 'finfo', 'flatten', 'flip', 'fliplr', 'flipud', 'float', 'float16', 'float32', 'float64', 'floor', 'floor_', 'floor_divide', 'fmod', 'fork', 'frac', 'frac_', 'frobenius_norm', 'from_file', 'from_numpy', 'full', 'full_like', 'functional', 'futures', 'gather', 'ge', 'geqrf', 'ger', 'get_default_dtype', 'get_device', 'get_file_path', 'get_num_interop_threads', 'get_num_threads', 'get_rng_state', 'glob', 'grid_sampler', 'grid_sampler_2d', 'grid_sampler_3d', 'group_norm', 'gru', 'gru_cell', 'gt', 'half', 'hamming_window', 'hann_window', 'hardshrink', 'has_cuda', 'has_cudnn', 'has_lapack', 'has_mkl', 'has_mkldnn', 'has_openmp', 'hinge_embedding_loss', 'histc', 'hsmm', 'hspmm', 'hub', 'ifft', 'iinfo', 'imag', 'import_ir_module', 'import_ir_module_from_buffer', 'index_add', 'index_copy', 'index_fill', 'index_put', 'index_put_', 'index_select', 'init_num_threads', 'initial_seed', 'instance_norm', 'int', 'int16', 'int32', 'int64', 'int8', 'int_repr', 'inverse', 'irfft', 'is_anomaly_enabled', 'is_autocast_enabled', 'is_complex', 'is_distributed', 'is_floating_point', 'is_grad_enabled', 'is_loaded', 'is_nonzero', 'is_same_size', 'is_signed', 'is_storage', 'is_tensor', 'is_vulkan_available', 'isclose', 'isfinite', 'isinf', 'isnan', 'istft', 'jit', 'kernel32', 'kl_div', 'kthvalue', 'last_error', 'layer_norm', 'layout', 'le', 'legacy_contiguous_format', 'lerp', 'lgamma', 'linspace', 'load', 'lobpcg', 'log', 'log10', 'log10_', 'log1p', 'log1p_', 'log2', 'log2_', 'log_', 'log_softmax', 'logaddexp', 'logaddexp2', 'logcumsumexp', 'logdet', 'logical_and', 'logical_not', 'logical_or', 'logical_xor', 'logspace', 'logsumexp', 'long', 'lstm', 'lstm_cell', 'lstsq', 'lt', 'lu', 'lu_solve', 'lu_unpack', 'manual_seed', 'margin_ranking_loss', 'masked_fill', 'masked_scatter', 'masked_select', 'matmul', 'matrix_power', 'matrix_rank', 'max', 'max_pool1d', 'max_pool1d_with_indices', 'max_pool2d', 'max_pool3d', 'mean', 'median', 'memory_format', 'merge_type_from_type_comment', 'meshgrid', 'min', 'miopen_batch_norm', 'miopen_convolution', 'miopen_convolution_transpose', 'miopen_depthwise_convolution', 'miopen_rnn', 'mkldnn_adaptive_avg_pool2d', 'mkldnn_convolution', 'mkldnn_convolution_backward_weights', 'mkldnn_max_pool2d', 'mm', 'mode', 'mul', 'multinomial', 'multiprocessing', 'mv', 'mvlgamma', 'name', 'narrow', 'native_batch_norm', 'native_group_norm', 'native_layer_norm', 'native_norm', 'ne', 'neg', 'neg_', 'nn', 'no_grad', 'nonzero', 'norm', 'norm_except_dim', 'normal', 'nuclear_norm', 'numel', 'nvtoolsext_dll_path', 'ones', 'ones_like', 'onnx', 'ops', 'optim', 'orgqr', 'ormqr', 'os', 'pairwise_distance', 'parse_ir', 'parse_schema', 'parse_type_comment', 'path_patched', 'pca_lowrank', 'pdist', 'per_channel_affine', 'per_channel_symmetric', 'per_tensor_affine', 'per_tensor_symmetric', 'pfiles_path', 'pinverse', 'pixel_shuffle', 'platform', 'poisson', 'poisson_nll_loss', 'polygamma', 'pow', 'prelu', 'prepare_multiprocessing_environment', 'preserve_format', 'prev_error_mode', 'prod', 'promote_types', 'py_dll_path', 'q_per_channel_axis', 'q_per_channel_scales', 'q_per_channel_zero_points', 'q_scale', 'q_zero_point', 'qint32', 'qint8', 'qr', 'qscheme', 'quantization', 'quantize_per_channel', 
'quantize_per_tensor', 'quantized_batch_norm', 'quantized_gru', 'quantized_gru_cell', 'quantized_lstm', 'quantized_lstm_cell', 'quantized_max_pool2d', 'quantized_rnn_relu_cell', 'quantized_rnn_tanh_cell', 'quasirandom', 'quint8', 'rad2deg', 'rad2deg_', 'rand', 'rand_like', 'randint', 'randint_like', 'randn', 'randn_like', 'random', 'randperm', 'range', 'real', 'reciprocal', 'reciprocal_', 'relu', 'relu_', 'remainder', 'renorm', 'repeat_interleave', 'res', 'reshape', 'resize_as_', 'result_type', 'rfft', 'rnn_relu', 'rnn_relu_cell', 'rnn_tanh', 'rnn_tanh_cell', 'roll', 'rot90', 'round', 'round_', 'rrelu', 'rrelu_', 'rsqrt', 'rsqrt_', 'rsub', 'saddmm', 'save', 'scalar_tensor', 'scatter', 'scatter_add', 'searchsorted', 'seed', 'select', 'selu', 'selu_', 'serialization', 'set_anomaly_enabled', 'set_autocast_enabled', 'set_default_dtype', 'set_default_tensor_type', 'set_flush_denormal', 'set_grad_enabled', 'set_num_interop_threads', 'set_num_threads', 'set_printoptions', 'set_rng_state', 'short', 'sigmoid', 'sigmoid_', 'sign', 'sin', 'sin_', 'sinh', 'sinh_', 'slogdet', 'smm', 'softmax', 'solve', 'sort', 'sparse', 'sparse_coo', 'sparse_coo_tensor', 'split', 'split_with_sizes', 'spmm', 'sqrt', 'sqrt_', 'square', 'square_', 'squeeze', 'sspaddmm', 'stack', 'std', 'std_mean', 'stft', 'storage', 'strided', 'sub', 'sum', 'svd', 'svd_lowrank', 'symeig', 'sys', 't', 'take', 'tan', 'tan_', 'tanh', 'tanh_', 'tensor', 'tensordot', 'testing', 'th_dll_path', 'threshold', 'threshold_', 'topk', 'torch', 'trace', 'transpose', 'trapz', 'triangular_solve', 'tril', 'tril_indices', 'triplet_margin_loss', 'triu', 'triu_indices', 'true_divide', 'trunc', 'trunc_', 'typename', 'types', 'uint8', 'unbind', 'unique', 'unique_consecutive', 'unsqueeze', 'utils', 'vander', 'var', 'var_mean', 'version', 'view_as_complex', 'view_as_real', 'wait', 'where', 'with_load_library_flags', 'zero_', 'zeros', 'zeros_like']
['Any', 'BFloat16Storage', 'BFloat16Tensor', 'BoolStorage', 'BoolTensor', 'ByteStorage', 'ByteTensor', 'CharStorage', 'CharTensor', 'ComplexDoubleStorage', 'ComplexFloatStorage', 'CudaError', 'DeferredCudaCallError', 'Device', 'Dict', 'DoubleStorage', 'DoubleTensor', 'Event', 'FloatStorage', 'FloatTensor', 'HalfStorage', 'HalfTensor', 'IntStorage', 'IntTensor', 'List', 'LongStorage', 'LongTensor', 'Optional', 'ShortStorage', 'ShortTensor', 'Stream', 'Tuple', 'Union', '_CudaBase', '_CudaDeviceProperties', '_StorageBase', '__annotations__', '__builtins__', '__cached__', '__doc__', '__file__', '__loader__', '__name__', '__package__', '__path__', '__spec__', '_check_capability', '_check_cubins', '_check_driver', '_cudart', '_device', '_device_t', '_dummy_type', '_get_device_index', '_initialization_lock', '_initialized', '_is_in_bad_fork', '_lazy_call', '_lazy_init', '_lazy_new', '_queued_calls', '_sleep', '_tls', '_utils', 'amp', 'caching_allocator_alloc', 'caching_allocator_delete', 'check_error', 'collections', 'comm', 'contextlib', 'cudaStatus', 'cudart', 'current_blas_handle', 'current_device', 'current_stream', 'default_generators', 'default_stream', 'device', 'device_count', 'device_of', 'empty_cache', 'get_arch_list', 'get_device_capability', 'get_device_name', 'get_device_properties', 'get_gencode_flags', 'get_rng_state', 'get_rng_state_all', 'has_half', 'has_magma', 'init', 'initial_seed', 'ipc_collect', 'is_available', 'is_initialized', 'manual_seed', 'manual_seed_all', 'max_memory_allocated', 'max_memory_cached', 'max_memory_reserved', 'memory', 'memory_allocated', 'memory_cached', 'memory_reserved', 'memory_snapshot', 'memory_stats', 'memory_stats_as_nested_dict', 'memory_summary', 'nccl', 'nvtx', 'os', 'profiler', 'raise_from', 'random', 'reset_accumulated_memory_stats', 'reset_max_memory_allocated', 'reset_max_memory_cached', 'reset_peak_memory_stats', 'seed', 'seed_all', 'set_device', 'set_rng_state', 'set_rng_state_all', 'sparse', 'stream', 'streams', 'synchronize', 'threading', 'torch', 'traceback', 'warnings']
['__annotations__', '__call__', '__class__', '__closure__', '__code__', '__defaults__', '__delattr__', '__dict__', '__dir__', '__doc__', '__eq__', '__format__', '__ge__', '__get__', '__getattribute__', '__globals__', '__gt__', '__hash__', '__init__', '__init_subclass__', '__kwdefaults__', '__le__', '__lt__', '__module__', '__name__', '__ne__', '__new__', '__qualname__', '__reduce__', '__reduce_ex__', '__repr__', '__setattr__', '__sizeof__', '__str__', '__subclasshook__']
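
As a side note (a small sketch added here, not from the original video; it assumes torch is installed), you can hide the dunder entries and use the standard library to tell a "compartment" from a "tool":

import inspect
import torch

# hide the double-underscore (dunder) entries to see the compartments more clearly
public_names = [name for name in dir(torch.cuda) if not name.startswith("_")]
print(public_names)

# a rough rule of thumb: modules are compartments, callables are tools
print(inspect.ismodule(torch.cuda))        # True  -> a compartment
print(callable(torch.cuda.is_available))   # True  -> a tool you can call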

So how do you find out what a tool actually does?
Use the help() function!

help(torch.cuda.is_available)	# view the docs for is_available(); pass the function itself, without the parentheses

Out:

Help on function is_available in module torch.cuda:

is_available() -> bool
    Returns a bool indicating if CUDA is currently available.

As you can see, torch.cuda.is_available returns a bool: this function tells you whether CUDA can currently be used.
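
For context (my addition, not part of the original tutorial), this is how is_available() is typically used in practice: pick the GPU when CUDA is available and fall back to the CPU otherwise.

import torch

# choose the device once, then create tensors and models on it
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
print(device)

x = torch.ones(2, 3, device=device)   # this tensor lives on the selected device
print(x)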

Summary: dir() shows what a module (or any object) contains, and help() shows what a function does.
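
A final self-contained sketch (my addition): the same two functions work on your own code, because help() simply prints a function's signature and its docstring (__doc__).

def add(a, b):
    """Return the sum of a and b."""
    return a + b

print(dir(add))   # a plain function also carries the dunder entries seen above
help(add)         # prints the signature plus the docstring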

That wraps up this article on the two essential Python learning functions, dir() and help(). I hope it was helpful, and please keep supporting 为之网!