Tactic Links - Organic Traffic Booster - Home


domain: astonzhang.com
summary: The page lists research papers by Aston Zhang and collaborators on large language models, multimodal learning, parameter-efficient fine-tuning, adapting image models for video understanding, automatic chain-of-thought prompting, and an evaluation of ChatGPT's capabilities across natural language processing tasks.

Notable works include:

1. "Law of the Weakest Link: Cross Capabilities of Large Language Models" (AllM. Zhong et al., ICLR 2025) - This paper explores the concept that large language models have varying capabilities, and their limitations can be seen as the 'weakest link' in a system.

2. "You Only Look at Screens: Multimodal Chain-of-Action Agents" (Z. Zhang et al., ACL 2024) - This work introduces a multimodal agent that can understand and respond to actions depicted in images or videos.

3. "Multimodal Chain-of-Thought Reasoning in Language Models" (Z. Zhang et al., TMLR 2024) - This paper proposes a chain-of-thought reasoning approach for multimodal language models, enabling them to better understand and generate complex responses based on various inputs.

4. "Parameter-Efficient Fine-Tuning Design Spaces" (J. Chen et al., ICLR 2023) - This research focuses on efficient fine-tuning methods for large language models by optimizing design spaces, reducing computational resources needed for adaptation.

5. "AIM: Adapting Image Models for Efficient Video Understanding" (T. Yang et al., ICLR 2023) - The authors present an approach to adapt image models for better video understanding tasks, improving their efficiency and performance.

6. "Automatic Chain of Thought Prompting in Large Language Models" (Z. Zhang et al., ICLR 2023) - This paper introduces a method for automatically generating chain-of-thought prompts to enhance the reasoning capabilities of large language models.

7. "Is ChatGPT a General-Purpose Natural Language Processing Task Solver" (C. Qin et al., EMNLP 2023) - Researchers evaluate ChatGPT's versatility across various natural language processing tasks, providing insights into its strengths and limitations as a general-purpose model.

8. "Beyond Fully-Connected Layers with Quaternions: Parameterization of Hypercomplex Multiplications with 1n Parameters" (A. Zhang et al., ICLR 2021, Outstanding Paper Award) - This groundbreaking work introduces a novel parameterization technique using quaternions instead of fully-connected layers, significantly reducing parameters needed for hypercomplex multiplications.

title: Aston Zhang
description: Aston Zhang
keywords: learning, conference, international, language, models, proceedings, representations, paper, award, chen, deep, chain, linguistics, yang, natural, processing, papers
upstreams:
downstreams:
nslookup: A 185.199.108.153, A 185.199.111.153, A 185.199.109.153, A 185.199.110.153
created: 2025-11-09
updated: 2025-11-09
summarized: 2025-11-11

HIGHSPOTS

tacticlinks.com
lhapsus.xyz
whimed.com
bytemux.io
decoupled.ai
shuken.io
greenpeace.org
escrache.org

Copyright © 2025 Tactic Links - All rights reserved