English-Chinese Dictionary (51ZiDian.com)

Look up "35362" in the Baidu, Google, or Yahoo English-to-Chinese dictionaries.

Related materials:


  • On the Sentence Embeddings from Pre-trained Language Models
    Pre-trained contextual representations like BERT have achieved great success in natural language processing. However, the sentence embeddings from pre-trained language models without fine-tuning have been found to poorly capture the semantic meaning of sentences.
  • On the Sentence Embeddings from Pre-trained Language Models
    Bohan Li, Hao Zhou, Junxian He, Mingxuan Wang, Yiming Yang, Lei Li. Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), 2020.
  • On the Sentence Embeddings from Pre-trained Language Models | Papers . . .
    To address this issue, we propose to transform the anisotropic sentence embedding distribution into a smooth and isotropic Gaussian distribution through normalizing flows that are learned with an unsupervised objective (a toy sketch of this flow objective follows the list below).
  • On the Sentence Embeddings from Pre-trained Language Models
    In this paper, we show how universal sentence representations trained using the supervised data of the Stanford Natural Language Inference datasets can consistently outperform unsupervised methods.
  • Extracting Sentence Embeddings from Pretrained Transformer Models
    Abstract: Pre-trained transformer models shine in many natural language processing tasks and are therefore expected to bear the representation of the input sentence or text meaning (a minimal pooling sketch follows the list below). These sentence-level embeddings are also important in retrieval-augmented generation.
  • ACL Anthology
    Li, Bohan; Zhou, Hao; He, Junxian; Wang, Mingxuan; Yang, Yiming; Li, Lei. "On the Sentence Embeddings from Pre-trained Language Models." In Bonnie Webber, Trevor Cohn, Yulan He, and Yang Liu (eds.), Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), 2020.
  • On the Sentence Embeddings from Pre-Trained Language Models
    A key enabler of deep learning for natural language processing has been the development of word embeddings. One reason for this is that deep learning intrinsically involves the use of neural network models, and these models only work with numeric inputs.
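Several entries above concern how a fixed-size sentence embedding is pulled out of a pre-trained transformer. Below is a minimal sketch of the common mean-pooling recipe, assuming the Hugging Face transformers package and the public bert-base-uncased checkpoint; it illustrates the general technique, not code from any paper listed above.

    import torch
    from transformers import AutoModel, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
    model = AutoModel.from_pretrained("bert-base-uncased")
    model.eval()

    def sentence_embedding(text: str) -> torch.Tensor:
        # Run BERT and take the last-layer hidden states.
        inputs = tokenizer(text, return_tensors="pt")
        with torch.no_grad():
            hidden = model(**inputs).last_hidden_state  # (1, seq_len, 768)
        # Mean-pool over real tokens, ignoring padding positions.
        mask = inputs["attention_mask"].unsqueeze(-1).float()
        return (hidden * mask).sum(dim=1) / mask.sum(dim=1)

    a = sentence_embedding("A man is playing a guitar.")
    b = sentence_embedding("Someone is playing an instrument.")
    print(torch.cosine_similarity(a, b).item())

Cosine similarities computed this way tend to come out high for almost any sentence pair, which is the anisotropy problem the BERT-flow entries above describe.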
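The normalizing-flow entry describes learning an invertible map f so that f(embedding) follows a smooth, isotropic Gaussian, by maximizing the change-of-variables log-likelihood. Here is a toy sketch of that unsupervised objective using a single RealNVP-style affine coupling layer on random vectors standing in for sentence embeddings; the actual BERT-flow model is Glow-based and trained on real BERT embeddings, so treat this purely as an illustration.

    import torch
    import torch.nn as nn

    class AffineCoupling(nn.Module):
        """One invertible coupling layer: z1 = x1, z2 = x2 * exp(s(x1)) + t(x1)."""
        def __init__(self, dim: int):
            super().__init__()
            self.half = dim // 2
            self.net = nn.Sequential(
                nn.Linear(self.half, 128), nn.ReLU(),
                nn.Linear(128, 2 * (dim - self.half)),
            )

        def forward(self, x):
            x1, x2 = x[:, :self.half], x[:, self.half:]
            log_s, t = self.net(x1).chunk(2, dim=1)
            log_s = torch.tanh(log_s)            # keep scales numerically tame
            z2 = x2 * torch.exp(log_s) + t
            log_det = log_s.sum(dim=1)           # log|det Jacobian| of the map
            return torch.cat([x1, z2], dim=1), log_det

    dim = 16
    flow = AffineCoupling(dim)
    opt = torch.optim.Adam(flow.parameters(), lr=1e-3)
    x = torch.randn(512, dim) * 3.0 + 1.0        # stand-in "anisotropic" vectors

    for step in range(200):
        z, log_det = flow(x)
        # Negative log-likelihood of x under a standard Gaussian prior on z
        # (constants dropped): maximize log N(z; 0, I) + log|det J|.
        nll = (0.5 * z.pow(2).sum(dim=1) - log_det).mean()
        opt.zero_grad()
        nll.backward()
        opt.step()

A real flow stacks several such layers with alternating splits so that every coordinate gets transformed; a single layer leaves the first half of each vector unchanged.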