Query2box: Reasoning over Knowledge Graphs in Vector Space using Box Embeddings

> [Ren H., Hu W. and Leskovec J. Query2box: Reasoning over knowledge graphs in vector space using box embeddings. ICLR, 2020.](http://arxiv.org/ ......
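Query2box embeds a query as an axis-aligned box (center, offset) and scores candidate entities by their distance to that box. Below is a minimal NumPy sketch of the paper's entity-to-box distance; the variable names and the default `alpha` are illustrative assumptions, not taken from the authors' code.

```python
# Sketch of the Query2box entity-to-box distance (illustrative, not the authors' code).
import numpy as np

def query2box_distance(v, center, offset, alpha=0.2):
    """Distance of entity embedding v to the box [center - offset, center + offset]."""
    q_min = center - offset
    q_max = center + offset
    # L1 distance from v to the box surface (zero when v lies inside the box)
    d_out = (np.maximum(v - q_max, 0) + np.maximum(q_min - v, 0)).sum()
    # L1 distance from the box center to v clipped into the box
    d_in = np.abs(center - np.minimum(q_max, np.maximum(q_min, v))).sum()
    # Points inside the box are only penalized by the (down-weighted) inside term
    return d_out + alpha * d_in
```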

Incrementer: Transformer for Class-Incremental Semantic Segmentation with Knowledge Distillation Focusing on Old Class (paper reading notes)

## Abstract Existing continual semantic segmentation methods are typically built on convolutional neural networks and must add extra convolutional layers to recognize new classes; moreover, when distilling features they do not distinguish between regions belonging to old versus new classes. To address this, the authors propose Incrementer, a Transformer-based network that only needs to add a corresponding token to the decoder when learning a new class. In addition, the authors also propose ......
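The key mechanism in the excerpt is that new classes enter as learnable tokens in the transformer decoder rather than as new convolutional heads. Here is a minimal PyTorch sketch of that general token-extension idea; all names and shapes are hypothetical assumptions, not the authors' Incrementer implementation.

```python
# Minimal sketch: grow a set of learnable class tokens for new classes
# instead of adding convolutional classifier heads. Hypothetical names/shapes.
import torch
import torch.nn as nn

class TokenIncrementalDecoder(nn.Module):
    def __init__(self, dim=256, num_base_classes=10):
        super().__init__()
        self.class_tokens = nn.Parameter(torch.randn(num_base_classes, dim))
        layer = nn.TransformerDecoderLayer(d_model=dim, nhead=8, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, num_layers=3)

    def add_new_classes(self, num_new):
        # Learning a new class = appending a new token; old tokens are kept.
        new = torch.randn(num_new, self.class_tokens.shape[1],
                          device=self.class_tokens.device)
        self.class_tokens = nn.Parameter(torch.cat([self.class_tokens.data, new], dim=0))

    def forward(self, pixel_features):
        # pixel_features: (B, N, dim) patch embeddings from the encoder.
        B = pixel_features.shape[0]
        tokens = self.class_tokens.unsqueeze(0).expand(B, -1, -1)
        class_embs = self.decoder(tokens, pixel_features)  # (B, C, dim)
        # Per-class mask logits via dot product between class and pixel embeddings.
        return torch.einsum("bcd,bnd->bcn", class_embs, pixel_features)
```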

Reading notes: Psychological Power between Knowledge and Practice; Inverted Totalitarianism

John Dewey once remarked that equality becomes dangerous when it is widely praised but empty in practice. Perhaps the most crucial element in the stru ......

April 2023 - Memory-efficient Reinforcement Learning with Value-based Knowledge Consolidation

Building on the deep Q-network algorithm, this paper proposes a memory-efficient reinforcement learning method to mitigate this problem. By consolidating knowledge from the target Q-network into the current Q-network (knowledge consolidation), the proposed algorithm reduces forgetting while maintaining high sample efficiency. ......
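A minimal PyTorch sketch of how such a consolidation term might look on top of the standard DQN objective: in addition to the TD loss, the current Q-network is regularized toward the target Q-network's outputs. The loss weighting `beta` and all names are my assumptions, not the paper's exact formulation.

```python
# Sketch: DQN TD loss plus a consolidation term that distills the target
# network's Q-values into the current network (assumed formulation).
import torch
import torch.nn.functional as F

def dqn_with_consolidation_loss(q_net, target_net, batch, gamma=0.99, beta=1.0):
    s, a, r, s_next, done = batch
    # Standard TD target from the (frozen) target network.
    with torch.no_grad():
        td_target = r + gamma * (1 - done) * target_net(s_next).max(dim=1).values
    q_sa = q_net(s).gather(1, a.unsqueeze(1)).squeeze(1)
    td_loss = F.smooth_l1_loss(q_sa, td_target)

    # Consolidation: match the current network's full Q-vector to the
    # target network's on the same states, reducing forgetting.
    with torch.no_grad():
        q_old = target_net(s)
    consolidation = F.mse_loss(q_net(s), q_old)
    return td_loss + beta * consolidation
```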

Weakly Supervised Temporal Action Localization via Representative Snippet Knowledge Propagation概述

0. Preface Related resources: arxiv github paper walkthrough Basic information: Field: weakly supervised temporal action localization Published: CVPR 2022 (2022.3.14) 1. Problem addressed Many existing methods try to generate pseudo labels to bridge the gap between classification and localization, but they typically use only limited context, i.e., the information within each snippet, to generate the pseudo labels. 2. Main contributions ......

Teachable Reinforcement Learning via Advice Distillation

**Published:** 2021 (NeurIPS 2021) **Key points:** This paper proposes a supervised paradigm for learning policies: roughly, first structure the advice, then learn to interpret the advice, and finally learn the policy from the advice. The advice comes from an external teacher, making it a kind of human-in-the-l ......

[Paper Reading Notes] Distilling Causal Effect of Data in Class-Incremental Learning

Authors: Hanwang Zhang, Xinting Hu Create_time: April 24, 2022 11:01 AM Edited_by: Huang Yujun Publisher: CVPR 2021 Org: Nanyang Technological Universi ......

11 zkrpChain: Towards multi-party privacy-preserving data auditing for consortium blockchains based on zero-knowledge range proofs

Paper reading: Understanding the Role of Mixup in Knowledge Distillation: An Empirical Study

Hongjun Choi, Eun Som Jeon, Ankita Shukla, Pavan Turaga; Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), 2023 ......

Paper reading: The Augmented Image Prior: Distilling 1000 Classes by Extrapolating from a Single Image

Y. M. Asano and A. Saeed, ‘THE AUGMENTED IMAGE PRIOR: DISTILLING 1000 CLASSES BY EXTRAPOLATING FROM A SINGLE IMAGE’, 2023. ICLR 2023; two authors from the University of Amsterdam and Eindhoven University of Technology ......

20230402 Zero-Knowledge Proof

https://zhuanlan.zhihu.com/p/144847471 The problem a zero-knowledge proof aims to solve: one party wants to prove to another that they know the answer to some problem, without revealing the answer itself. Feels a bit sneaky, doesn't it? https://blog.csdn.net/qq_35739903/article/details/1 ......
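To make the "prove you know the answer without revealing it" idea concrete, here is a toy Schnorr-style interactive proof of knowledge of a discrete logarithm in pure Python. This is a generic textbook protocol with deliberately insecure toy parameters, not an example taken from the linked posts.

```python
# Toy Schnorr identification: prove knowledge of x with y = g^x mod p,
# without revealing x. Insecure parameters, for illustration only.
import random

p = 2**127 - 1   # a Mersenne prime used as a toy modulus
g = 3            # assumed generator

x = random.randrange(2, p - 1)   # prover's secret ("the answer")
y = pow(g, x, p)                 # public value the proof is about

# One round of the interactive protocol:
k = random.randrange(2, p - 1)
t = pow(g, k, p)                 # prover's commitment
c = random.randrange(2, p - 1)   # verifier's random challenge
s = (k + c * x) % (p - 1)        # prover's response (exponents mod group order)

# Verifier checks g^s == t * y^c (mod p) and learns nothing about x.
assert pow(g, s, p) == (t * pow(y, c, p)) % p
```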

TIE: A Framework for Embedding-based Incremental Temporal Knowledge Graph Completion (paper notes on incremental temporal knowledge graph completion)

Paper: https://dl.acm.org/doi/10.1145/3404835.3462961 Arxiv: https://arxiv.org/abs/2104.08419 The paper proposes applying incremental learning to temporal knowledge graph completion (Temporal Knowledge Graph Completion, ......

Relational Learning with Gated and Attentive Neighbor Aggregator for Few-Shot Knowledge Graph Completion (paper notes on few-shot knowledge graph completion)

Few-shot knowledge graph completion via relational learning. The paper exploits the neighborhood information of triples to improve the model's relation representation learning, enabling few-shot link prediction. The main ideas and models applied include GAT (graph attention networks), TransH, LSTM, and Model-Agnostic Meta-Learning (MAML). Paper: https://arxiv.org ......
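Since the note names TransH as one of the building blocks, here is a minimal NumPy sketch of its well-known scoring function: project head and tail embeddings onto a relation-specific hyperplane, then measure the translation residual. Names are illustrative; lower scores mean more plausible triples.

```python
# Sketch of the TransH scoring function (illustrative names).
import numpy as np

def transh_score(h, t, d_r, w_r):
    """h, t: entity embeddings; d_r: relation translation; w_r: hyperplane normal."""
    w_r = w_r / np.linalg.norm(w_r)       # keep the normal vector unit-length
    h_proj = h - np.dot(w_r, h) * w_r     # project head onto the hyperplane
    t_proj = t - np.dot(w_r, t) * w_r     # project tail onto the hyperplane
    # Translation residual on the hyperplane; lower = more plausible triple.
    return np.linalg.norm(h_proj + d_r - t_proj, ord=2) ** 2
```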