Zheqi Lv (Zhejiang University, Hangzhou, China) zheqilv@zju.edu.cn, Wenqiao Zhang (Zhejiang University, Hangzhou, China) wenqiaozhang@zju.edu.cn, Zhengyu Chen (Zhejiang University, Hangzhou, China) chenzhengyu@zju.edu.cn, Shengyu Zhang (Zhejiang University, Hangzhou, China) sy_zhang@zju.edu.cn, and Kun Kuang (Zhejiang University, Hangzhou, China) kunkuang@zju.edu.cn

Abstract. Modern online platforms increasingly employ recommendation systems to address information overload and improve user engagement. An evolving paradigm in this research field is that recommendation network learning occurs both on the cloud and on edges, with knowledge transfer in between (i.e., edge-cloud collaboration). Recent works push this field further by enabling edge-specific, context-aware adaptivity, where model parameters are updated in real time based on incoming on-edge data. However, we argue that frequent data exchanges between the cloud and edges often lead to inefficiency and waste of communication/computation resources, as considerable parameter updates may be redundant. To investigate this problem, we introduce the Intelligent Edge-Cloud Parameter Request Model (IntellectReq). IntellectReq is designed to operate on the edge, evaluating the cost-benefit landscape of parameter requests with minimal computation and communication overhead. We formulate this as a novel learning task, aimed at detecting out-of-distribution data, and use it to drive adaptive communication strategies. Further, we employ statistical mapping techniques to convert real-time user behavior into a normal distribution, using multi-sample outputs to quantify the model's uncertainty and thus its generalization capability. Rigorous empirical validation on three widely adopted benchmarks evidences a marked improvement in the efficiency and generalizability of edge-cloud collaborative and dynamic recommendation systems.

Keywords: Edge-Cloud Collaboration, Distribution Shift, Mis-Recommendation Detection, Out-of-Domain Detection, Sequential Recommendation

Proceedings of the ACM Web Conference 2024 (WWW '24), May 13-17, 2024, Singapore, Singapore. DOI: 10.1145/3589334.3645316. ISBN: 979-8-4007-0171-9/24/05. CCS Concepts: Information systems → Mobile information processing systems; Information systems → Personalization; Human-centered computing → Mobile computing.

1. Introduction

With the rapid development of e-commerce and social media platforms, recommendation systems (Hidasi et al., 2016; Kang and McAuley, 2018; Zhang et al., 2023a; Lv et al., 2022; Zhang et al., 2024) have become indispensable tools in daily life. They take various forms depending on the industry, such as product suggestions on online e-commerce websites (e.g., Amazon and Taobao) or playlist generators for video and music services (e.g., YouTube, Netflix, and Spotify). A classical industrial recommendation system trains a universal model with static parameters on a powerful cloud, conditioned on rich data collected from different edges, and then performs edge inference for all users; representative examples include DIN (Zhou et al., 2018), SASRec (Kang and McAuley, 2018), and GRU4Rec (Hidasi et al., 2016). As the first model presented in Figure 1 shows, this cloud-based static model lets users share a centralized model, enabling real-time inference across all edges.
However, it cannot exploit the personalized recommendation patterns specific to each edge, because the data distribution shifts between the cloud and the edge. As is well known, a shift in the distribution of test data relative to training data degrades model performance (Chen and Wang, 2021; Chen et al., 2022; Zhang et al., 2021, 2022b, 2023c; Zhu et al., 2023b, a; Tong et al., 2023; Zhang et al., 2022a, 2023b; Zhang and Lv, 2024). Existing solutions can be broadly classified into two categories:

(i) On-Edge Learning. It improves personalization through on-edge learning (the second method depicted in Figure 1(a)), building on the on-edge static model. Techniques such as distillation (Sanh et al., 2019) and fine-tuning (Cai et al., 2020) can mitigate the discrepancy between edge and cloud distributions through re-training at the edge. However, retraining at the edge involves a significant amount of computation, particularly for backpropagation, and the resulting drop in real-time performance reduces its practicality.

(ii) Edge-Cloud Collaboration (Yao et al., 2022a; Yan et al., 2022a). It leverages edge-cloud collaboration to efficiently update the parameters of the edge model according to the on-edge real-time data distribution (Lv et al., 2023b; Yan et al., 2022b). Recent advancements introduce adaptive parameter generation (Yan et al., 2022b; Lv et al., 2023b) (the third method in Figure 1(a)), which facilitates model personalization without additional on-edge computational cost. This method uses a pre-trained hypernetwork (Ha et al., 2017) to convert the user's real-time click sequence into adaptive parameters through forward propagation. These parameters are then pushed to the edge model, allowing it to better fit the real-time data distribution for swift personalization of recommendations. This method, termed Edge-Cloud Collaborative and Dynamic Recommendation (EC-CDR), offers tailored recommendation models across various on-edge distributions.

EC-CDR faces deployment challenges in real-world settings due to two key issues: (i) High Request Frequency. Updating EC-CDR model parameters through edge-cloud communication after every new user click causes a surge in concurrent cloud requests from multiple edges in industrial settings. This problem worsens in unstable networks, limiting EC-CDR's efficiency due to communication and network constraints. (ii) Low Communication Revenue. When the latest real-time data is the same as, or closely matches, the distribution previously used to update the model parameters, communication from edge to cloud is unnecessary. That is, the moment of distribution shift does not always coincide with the timing of model updates at the edge, and unnecessary communication between cloud and edge leads to low efficiency in communication resource utilization.

To analyze EC-CDR's communication issues, we examined users' click classes (viewed as domains) on the edge. As shown in Figure 2, by collecting item embedding vectors from user clicks across three datasets and clustering them into 50 domains, we found that users typically engage with only 10 to 15 domains. This repetitive behavior indicates that EC-CDR fails to recognize when the on-edge data distribution actually shifts, resulting in frequent dynamic parameter requests and high communication overhead. Based on these insights, our primary optimization goal is to minimize unnecessary communication, aiming for a highly efficient EC-CDR system.
To achieve this, we design IntellectReq for deployment on the edge, tasked with assessing the necessity of each request at minimal resource cost. This strategy significantly improves communication efficiency in EC-CDR. IntellectReq is operationalized through two components: the Mis-Recommendation Detector (MRD) and the Distribution Mapper (DM).

The MRD predicts the likelihood that the edge recommendation model will make an incorrect recommendation, termed a mis-recommendation. It does so by learning to map the current data, together with the data previously used to update the model, to mis-recommendation labels. MRD then translates these predictions into the potential revenue of updating the edge model, maximizing revenue within any communication budget and keeping the model near its optimal performance. The DM allows the model to detect potential shifts in the data distribution and to assess the model's uncertainty in interpreting real-time data, which in turn strengthens the MRD. It comprises three components: a prior network, a posterior network, and a next-item prediction network, the last serving as DM's backbone. During training, data features are extracted through both the prior and posterior networks, using label-provided posterior information to improve training efficiency; during inference, the prior network is used for feature extraction. By evaluating the model's uncertainty in processing real-time data, achieved by mapping this data to a normal distribution, DM significantly improves MRD's prediction accuracy. Conventional recommendation datasets are inadequate for these tasks; we therefore restructure them into a new MRD dataset without any extra annotations. This restructuring provides the supervision needed to train the MRD and DM models within the EC-CDR system.

To summarize, our contributions are four-fold:

- We are the first to identify the issues of high communication frequency and low communication revenue in EC-CDR, and we introduce IntellectReq to address them; the method brings edge recommendation models to SOTA performance and achieves personalized updates without retraining.
- We design IntellectReq and instantiate it with a Mis-Recommendation Detector (MRD) and a Distribution Mapper (DM). IntellectReq quantifies changes in the on-edge data distribution and, given an actual communication or cloud-computation budget, determines which edge models need to be updated.
- We construct mis-recommendation datasets from existing recommendation datasets, as current datasets are unsuitable for training IntellectReq, enabling training without additional manual annotation.
- We evaluate our method with extensive experiments, which demonstrate that IntellectReq achieves high revenue under any edge-cloud communication budget.

2. Related Work

Edge-Cloud Collaboration. Deep learning applications are widely used (Wang et al., 2017; Li et al., 2022b, 2023c, 2023d; Wu et al., 2023a, b; Tang et al., 2024b; Qin et al., 2020), but they are fundamentally resource-intensive and difficult to deploy on the edge (Tang et al., 2024a; Chen et al., 2024, 2023; Huang et al., 2022b, 2023, a; Cao et al., 2023; Li et al., 2023a, 2022a, b), so edge-cloud collaboration (Yao et al., 2022b; Qian et al., 2022) is playing an increasingly important role.
Cloud-based and on-edge machine learning are two distinct approaches with different benefits and drawbacks; edge-cloud collaboration can combine their advantages and make them complement one another. Federated learning, such as FedAVG (McMahan et al., 2017), is one of the best-known forms of edge-cloud collaboration and is often applied to tasks such as multi-task learning (Mills et al., 2021; Marfoq et al., 2021). However, federated learning is too rigid for many real-world scenarios. Yao et al. (2022a) design multiple models with the same function but different training processes, with a meta controller deciding which model to use. EC-CDR methods, such as DUET (Lv et al., 2023b), draw inspiration from the HyperNetwork concept, ensuring that edge models generalize well to the current data distribution at every moment without any on-edge training. However, high request frequency and low communication revenue significantly reduce their practicality; this paper focuses on addressing these shortcomings of EC-CDR.

Sequential Recommendation. Sequential recommendation models the user's historical behavior sequence. Earlier sequential recommendation algorithms such as (Rendle et al., 2010) and (Latifi et al., 2021) are not deep-learning based and use Markov chains to model behavioral sequences. To improve performance, recent works (Hidasi et al., 2016; Zhou et al., 2018; Kang and McAuley, 2018; Sun et al., 2019; Wu et al., 2019; Chang et al., 2021; Zhang et al., 2023a, 2020; Chen et al., 2021; Lv et al., 2023a, 2022; Su et al., 2023b, a; Ji et al., 2023b, a; Li et al., 2023e, 2024; Lin et al., 2023; Lin and Chua, 2024) propose deep-learning-based sequential recommendation models. Among the most well-known: GRU4Rec (Hidasi et al., 2016) uses a GRU to model behavior sequences and achieves excellent performance, while DIN (Zhou et al., 2018) and SASRec (Kang and McAuley, 2018) introduce attention and the Transformer, respectively, into sequential recommendation and are both fast and efficient. These methods are influential in academia and industry alike. In practical settings, deploying recommendation models at the edge is constrained by limited parameters and model complexity, and the need for real-time operation hampers on-edge model updates with conventional methods, which limits generalization across data distributions. This paper explores how to lower communication costs for a more efficient EC-CDR paradigm.

3. Methodology

We describe the proposed IntellectReq in this section by presenting each module and introducing its learning strategy.

3.1. Problem Formulation
In EC-CDR, we have access to a set of edges $\mathcal{D}=\{d^{(i)}\}_{i=1}^{\mathcal{N}_d}$, where each edge holds personal i.i.d. history samples
$$\mathcal{S}_{H^{(i)}}=\big\{x^{(j,t)}_{H^{(i)}}=\{u^{(j)}_{H^{(i)}},\,v^{(j)}_{H^{(i)}},\,s^{(j,t)}_{H^{(i)}}\},\;y^{(j)}_{H^{(i)}}\big\}_{j=1}^{\mathcal{N}_{H^{(i)}}}$$
and real-time samples
$$\mathcal{S}_{R^{(i)}}=\big\{x^{(j,t)}_{R^{(i)}}=\{u^{(j)}_{R^{(i)}},\,v^{(j)}_{R^{(i)}},\,s^{(j,t)}_{R^{(i)}}\}\big\}_{j=1}^{\mathcal{N}_{R^{(i)}}}$$
in the current session, where $\mathcal{N}_d$, $\mathcal{N}_{H^{(i)}}$, and $\mathcal{N}_{R^{(i)}}$ denote the number of edges, history samples, and real-time samples, respectively; $u$, $v$, and $s$ denote the user, the item, and the click sequence composed of items. Note that $s^{(j,t)}$ denotes the click sequence at moment $t$ in the $j$-th sample.
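To make the notation concrete, here is a minimal Python sketch of how one on-edge sample could be represented. The field names are illustrative assumptions, not part of the paper's specification.

    from dataclasses import dataclass, field
    from typing import List, Optional

    @dataclass
    class EdgeSample:
        """One sample x^(j,t) = {u, v, s}, with label y for history data.

        Field names are hypothetical; the paper only fixes the notation."""
        user_id: int                                          # u: the user
        item_id: int                                          # v: the candidate item
        click_seq: List[int] = field(default_factory=list)   # s^(j,t): items clicked up to moment t
        label: Optional[int] = None                           # y: click label (absent for real-time samples)

    # A toy real-time sample: user 7 considering item 42 after three clicks.
    sample = EdgeSample(user_id=7, item_id=42, click_seq=[3, 15, 8])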
The goal of EC-CDR is to generalize a trained global cloud model $\mathcal{M}_g(\cdot;\Theta_g)$, learned from $\{\mathcal{S}_{H^{(i)}}\}_{i=1}^{\mathcal{N}_d}$, to each specific local edge model $\mathcal{M}_{d^{(i)}}(\cdot;\Theta_{d^{(i)}})$ conditioned on the real-time samples $\mathcal{S}_{R^{(i)}}$, where $\Theta_g$ and $\Theta_{d^{(i)}}$ denote the learned parameters of the global cloud model and the local edge model, respectively:
$$\text{EC-CDR:}\qquad \underbrace{\mathcal{M}_g\big(\{\mathcal{S}_{H^{(i)}}\}_{i=1}^{\mathcal{N}_d};\,\Theta_g\big)}_{\text{Global Cloud Model}} \;\overset{[\text{Parameters}]}{\underset{\text{Data}}{\longleftrightarrow}}\; \underbrace{\mathcal{M}_{d^{(i)}}\big(\mathcal{S}_{R^{(i)}};\,\Theta_{d^{(i)}}\big)}_{\text{Local Edge Model}}. \tag{1}$$
To determine whether to request parameters from the cloud, IntellectReq uses $\mathcal{S}_{\mathrm{MRD}}$ to learn a Mis-Recommendation Detector, which decides whether to update the edge model through the EC-CDR framework. $\mathcal{S}_{\mathrm{MRD}}$ is a dataset constructed from $\mathcal{S}_H$ without any additional annotations for training IntellectReq, and $\Theta_{\mathrm{MRD}}$ denotes the learned parameters of the local MRD model:
$$\text{IntellectReq:}\qquad \underbrace{\mathcal{M}_c^{(i,t)}\big(\mathcal{S}_{\mathrm{MRD}};\,\Theta_{\mathrm{MRD}}\big)}_{\text{Local MRD Model}} \;\xrightarrow{\text{Control}}\; \underbrace{\Big(\mathcal{M}_g \;\overset{[\text{Parameters}]}{\underset{\text{Data}}{\longleftrightarrow}}\; \mathcal{M}_{d^{(i)}}\Big)}_{\text{EC-CDR}}. \tag{2}$$

3.2. IntellectReq

Figure 3 gives an overview of the recommendation model, EC-CDR, and the IntellectReq framework, which consists of the Mis-Recommendation Detector (MRD) and the Distribution Mapper (DM) and achieves high revenue under any request budget. We first introduce EC-CDR and then present IntellectReq, which we propose to overcome the frequent, low-revenue requests of EC-CDR; IntellectReq achieves high communication revenue under any edge-cloud communication budget.
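Before detailing the components, the following minimal Python sketch illustrates the control flow of Eq. (2): on each new click, the local MRD decides whether to request freshly generated dynamic parameters from the cloud or to keep the current edge model. All argument names and methods are hypothetical stand-ins rather than the authors' implementation; the Mis-Recommendation Score and its threshold are defined in Section 3.2.4.

    def on_new_click(edge_model, f_mrd, cloud_generator, seq_now, seq_last, threshold):
        """Edge-side control loop sketched from Eq. (2).

        Assumed interfaces: f_mrd(s_t, s_t_prev) returns the probability that
        the current edge model still recommends correctly; cloud_generator(s)
        returns fresh dynamic-layer parameters (one edge-cloud request)."""
        mrs = 1.0 - f_mrd(seq_now, seq_last)   # Mis-Recommendation Score (Section 3.2.4)
        if mrs <= threshold:                   # low score: a mis-recommendation is likely
            edge_model.load_dynamic_layers(cloud_generator(seq_now))  # spend one request
            seq_last = seq_now                 # remember the sequence used for this update
        # Otherwise reuse the previously generated parameters at zero communication cost.
        return edge_model.predict(seq_now), seq_last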
MRD determines whether to request parameters from the cloud model $\mathcal{M}_g$ or to keep using the current edge recommendation model $\mathcal{M}_d$, based on the real-time data $\mathcal{S}_{R^{(i)}}$. DM helps MRD make further judgments by estimating the uncertainty in the recommendation model's understanding of the data semantics.

3.2.1. The Framework of EC-CDR

In EC-CDR, a recommendation model with static layers and dynamic layers is trained as the global cloud model. The goal of EC-CDR can thus be formulated as the following optimization problem:
$$\hat{y}^{(j)}_{H^{(i)}} = f_{\mathrm{rec}}\big(\Omega(x^{(j)}_{H^{(i)}};\Theta_g^b);\,\Theta_g^c\big), \qquad \mathcal{L}_{\mathrm{rec}} = \sum_{i=1}^{\mathcal{N}_d}\sum_{j=1}^{\mathcal{N}_{R^{(i)}}} D_{\mathrm{ce}}\big(y^{(j)}_{H^{(i)}},\,\hat{y}^{(j)}_{H^{(i)}}\big), \tag{3}$$
where $D_{\mathrm{ce}}(\cdot)$ denotes the cross-entropy between two probability distributions, $f_{\mathrm{rec}}(\cdot)$ denotes the dynamic layers of the recommendation model, and $\Omega(x^{(j)}_{H^{(i)}};\Theta_g^b)$ denotes the static layers extracting features from $x^{(j)}_{H^{(i)}}$. EC-CDR thus decouples the edge model into a "static layers" plus "dynamic layers" training scheme to achieve better personalization.

The primary factor enhancing the on-edge model's generalization to real-time data through EC-CDR is its dynamic layers. Upon completion of training, the static layers' parameters $\Theta_g^b$ remain fixed, as determined by Eq. (3). Conversely, the dynamic layers' parameters $\Theta_g^c$ are dynamically generated from real-time data by the cloud generator. In edge inference, the cloud-based parameter generator uses the real-time click sequence $s^{(j,t)}_{R^{(i)}}\in\mathcal{S}_{R^{(i)}}$ to generate the parameters:
$$h^{(n)}_{R^{(i)}} = L^{(n)}_{\mathrm{layer}}\big(e^{(j,t)}_{R^{(i)}} = E_{\mathrm{shared}}(s^{(j,t)}_{R^{(i)}})\big), \qquad \forall n = 1,\dots,N_l, \tag{4}$$
where $E_{\mathrm{shared}}(\cdot)$ is the shared encoder and $L^{(n)}_{\mathrm{layer}}(\cdot)$ is a linear layer that adapts $e^{(j,t)}_{R^{(i)}}$, the output of $E_{\mathrm{shared}}(\cdot)$, into the features of the $n$-th dynamic layer; $e^{(j,t)}_{R^{(i)}}$ is the embedding vector generated from the click sequence at moment $t$. The cloud generator treats the parameters of a fully-connected layer as a matrix $K^{(n)}\in\mathbb{R}^{N_{\mathrm{in}}\times N_{\mathrm{out}}}$, where $N_{\mathrm{in}}$ and $N_{\mathrm{out}}$ are the numbers of input and output neurons of the $n$-th fully-connected layer. The cloud generator $g(\cdot)$ then converts the real-time click sequence $s^{(j,t)}_{R^{(i)}}$ into the dynamic layers' parameters $\hat{\Theta}_g^c$ via $K^{(n)}_{R^{(i)}} = g^{(n)}(e^{(n)}_{R^{(i)}})$. Since the superscript $(n)$ is not needed below, we abbreviate $g(\cdot) = L_{\mathrm{layer}}(E_{\mathrm{shared}}(\cdot))$. The edge recommendation model then updates its parameters and performs inference as:
$$\hat{y}^{(j,t)}_{R^{(i)}} = f_{\mathrm{rec}}\big(\Omega(x^{(j,t)}_{R^{(i)}};\Theta_g^b);\;\hat{\Theta}_g^c = g(s^{(j,t)}_{R^{(i)}};\Theta_p)\big). \tag{5}$$

Figure 4. Overview of the proposed Distribution Mapper. Training procedure: the architecture comprises the Recommendation Network, Prior Network, Posterior Network, and Next-item Prediction Network; the loss consists of a classification loss and a KL-divergence loss. Inference procedure: the architecture comprises the Recommendation Network, Prior Network, and Next-item Prediction Network; the uncertainty is calculated from the multi-sampling output.
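To illustrate Eqs. (4)-(5), here is a minimal PyTorch sketch of a cloud generator that maps a click sequence to the weight matrices of the dynamic fully-connected layers. The choice of a GRU as $E_{\mathrm{shared}}$, the dimensions, and all module names are assumptions for illustration, not the authors' exact architecture.

    import torch
    import torch.nn as nn

    class CloudGenerator(nn.Module):
        """Sketch of g(.) in Eqs. (4)-(5): a shared encoder E_shared followed by
        per-layer heads L_layer^(n) that emit the n-th dynamic layer's
        parameters, treated as an N_in x N_out matrix K^(n)."""

        def __init__(self, n_items, emb_dim, layer_shapes):
            super().__init__()
            self.item_emb = nn.Embedding(n_items, emb_dim)
            self.encoder = nn.GRU(emb_dim, emb_dim, batch_first=True)  # E_shared (one plausible choice)
            self.heads = nn.ModuleList(
                nn.Linear(emb_dim, n_in * n_out) for n_in, n_out in layer_shapes
            )
            self.layer_shapes = layer_shapes

        def forward(self, click_seq):            # click_seq: (batch, seq_len) item ids
            e = self.item_emb(click_seq)         # (batch, seq_len, emb_dim)
            _, h = self.encoder(e)               # h: (1, batch, emb_dim)
            e_seq = h[-1]                        # sequence embedding e^(j,t)
            return [                             # K^(n) = g^(n)(e), one matrix per dynamic layer
                head(e_seq).view(-1, n_in, n_out)
                for head, (n_in, n_out) in zip(self.heads, self.layer_shapes)
            ]

    gen = CloudGenerator(n_items=1000, emb_dim=32, layer_shapes=[(32, 16), (16, 1)])
    K = gen(torch.randint(0, 1000, (4, 20)))     # 4 users, 20 clicks each
    # The edge model plugs these in as its dynamic layers, e.g. for features
    # x of shape (4, 32): torch.bmm(x.unsqueeze(1), K[0]).squeeze(1).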
In cloud training, all layers of the cloud generator are optimized together with the static layers of the primary model, conditioned on the global history data $\mathcal{S}_{H^{(i)}}=\{x^{(j)}_{H^{(i)}},y^{(j)}_{H^{(i)}}\}_{j=1}^{\mathcal{N}_{H^{(i)}}}$, instead of first optimizing the static layers and then the generator. Following Eqs. (3) and (5), the cloud generator's loss is the cross-entropy over predictions made with generated dynamic parameters:
$$\mathcal{L}_{\mathrm{gen}} = \sum_{i=1}^{\mathcal{N}_d}\sum_{j=1}^{\mathcal{N}_{H^{(i)}}} D_{\mathrm{ce}}\Big(y^{(j)}_{H^{(i)}},\; f_{\mathrm{rec}}\big(\Omega(x^{(j,t)}_{H^{(i)}};\Theta_g^b);\, g(s^{(j,t)}_{H^{(i)}};\Theta_p)\big)\Big). \tag{6}$$
EC-CDR can improve the generalization ability of the edge recommendation model. However, it cannot be easily deployed in a real-world environment because of its high request frequency and low communication revenue. Under the EC-CDR framework, the moment $t$ in Eq. (5) equals the current moment $T$, meaning that the edge and the cloud communicate at every moment. In fact, much of this communication is unnecessary, because the $\hat{\Theta}_g^c$ generated from an earlier sequence may already work well enough. To alleviate this issue, we propose MRD and DM to decide when the edge recommendation model should update its parameters.

3.2.2. Mis-Recommendation Detector

The training procedure of MRD has two stages. The goal of the first stage is to construct an MRD dataset $\mathcal{S}_{\mathrm{MRD}}$ from the user's historical data, without any additional annotation, to train the MRD. The cloud model $\mathcal{M}_g$ and the edge model $\mathcal{M}_d$ are trained exactly as in EC-CDR. We predict the current sample using parameters generated from an earlier sequence:
$$\hat{y}^{(j,t,t')}_{R^{(i)}} = f_{\mathrm{rec}}\big(\Omega(x^{(j,t)}_{R^{(i)}};\Theta_g^b);\, g(s^{(j,t')}_{R^{(i)}};\Theta_p)\big). \tag{7}$$
Here we set $t' \le t = T$; that is, when generating model parameters we use the click sequence $s^{(j,t')}_{R^{(i)}}$ from the earlier moment $t'$, but the resulting model predicts the current data. We then obtain $c^{(j,t,t')}$, indicating whether the sample is correctly predicted, by comparing the prediction $\hat{y}^{(j,t,t')}_{R^{(i)}}$ with the ground truth $y^{(j,t)}_{R^{(i)}}$:
$$c^{(j,t,t')} = \begin{cases} 1, & \hat{y}^{(j,t,t')}_{R^{(i)}} = y^{(j,t)}_{R^{(i)}};\\ 0, & \hat{y}^{(j,t,t')}_{R^{(i)}} \neq y^{(j,t)}_{R^{(i)}}. \end{cases} \tag{8}$$
We then construct the new mis-recommendation training dataset as $\mathcal{S}_{\mathrm{MRD}^{(i)}} = \{s^{(j,t)}, s^{(j,t')}, c^{(j,t,t')}\}_{0\le t'\le t=T}$, and train the dynamic layers $f_{\mathrm{MRD}}(\cdot)$ on $\mathcal{S}_{\mathrm{MRD}^{(i)}}$ with $t = T$, where the loss function $l(\cdot)$ is the cross entropy:
$$\mathcal{L}_{\mathrm{MRD}} = \sum_{j=1}^{|\mathcal{S}_{\mathrm{MRD}^{(i)}}|}\sum_{t'=1}^{T} l\big(c^{(j,t,t')},\, \hat{c} = f_{\mathrm{MRD}}(s^{(j,t)}, s^{(j,t')};\Theta_{\mathrm{MRD}})\big). \tag{9}$$
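The following is a minimal sketch of how Eqs. (7)-(9) could turn logged sessions into mis-recommendation supervision: predict the click at the current moment with dynamic parameters generated from an earlier sequence, and record whether the prediction was correct. The session format, `edge_model.predict`, `generator`, and `f_mrd` are hypothetical placeholders.

    import torch

    def build_mrd_dataset(edge_model, generator, sessions):
        """Construct S_MRD (Eqs. 7-8) without manual annotation. Each session
        is a (seq, labels) pair of logged item ids and ground-truth clicks."""
        dataset = []
        for seq, labels in sessions:
            T = len(seq) - 1
            for t_prev in range(T + 1):                          # 0 <= t' <= t = T
                stale = generator(seq[: t_prev + 1])             # params from s^(j,t')
                y_hat = edge_model.predict(seq[: T + 1], stale)  # predict at moment t = T
                c = int(y_hat == labels[T])                      # Eq. (8): correct or not
                dataset.append((seq[: T + 1], seq[: t_prev + 1], c))
        return dataset

    def train_mrd(f_mrd, dataset, lr=1e-3):
        """Train f_MRD with cross-entropy (Eq. 9); f_mrd maps the two
        sequences to the probability of a correct recommendation."""
        opt = torch.optim.Adam(f_mrd.parameters(), lr=lr)
        bce = torch.nn.BCELoss()
        for s_t, s_prev, c in dataset:
            loss = bce(f_mrd(s_t, s_prev), torch.tensor([float(c)]))
            opt.zero_grad(); loss.backward(); opt.step()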
3.2.3. Distribution Mapper

Although the MRD can decide when to update the edge parameters, simply mapping a click sequence to a fixed representation in a high-dimensional space is insufficient, owing to the ubiquitous noise in click sequences. We therefore design the DM (Figure 4) so that it can directly perceive data distribution shift and quantify the uncertainty in the recommendation model's understanding of the data semantics. Inspired by the Conditional VAE, we map click sequences to normal distributions. Unlike the plain MRD, the DM-augmented module introduces a variable $u^{(j,t)}$ denoting uncertainty into Eq. (9):
$$\mathcal{L}_{\mathrm{MRD}} = \sum_{j=1}^{|\mathcal{S}_{\mathrm{MRD}^{(i)}}|}\sum_{t'=1}^{T} l\big(c^{(j,t,t')},\, \hat{c} = f_{\mathrm{MRD}}(s^{(j,t)}, s^{(j,t')}, u^{(j,t)};\Theta_{\mathrm{MRD}})\big). \tag{10}$$
The uncertainty variable $u^{(j,t)}$ reflects the recommendation model's understanding of the data semantics; DM focuses on how to learn it.

DM consists of three components, as shown in the figure in the Appendix: the Prior Network $P(\cdot)$ (PRN), the Posterior Network $Q(\cdot)$ (PON), and the Next-item Prediction Network $f(\cdot)$ (NPN), which includes the static layers $\Omega(\cdot)$ and dynamic layers $f_{\mathrm{NPN}}(\cdot)$. Note that $\Omega(\cdot)$ here is the same as in Sections 3.2.1 and 3.2.2, so it adds almost no extra resource consumption. We first introduce the three components, then the training and inference procedures.

Prior Network. The Prior Network, with weights $\Theta_{\mathrm{prior}}$ and $\Theta'_{\mathrm{prior}}$, maps the representation of a click sequence $s^{(j,t)}$ to a prior probability distribution. We take this prior to be a normal distribution with mean $\mu^{(j,t)}_{\mathrm{prior}} = \Omega_{\mathrm{prior}}(s^{(j,t)};\Theta_{\mathrm{prior}}) \in \mathbb{R}^N$ and variance $\sigma^{(j,t)}_{\mathrm{prior}} = \Omega'_{\mathrm{prior}}(s^{(j,t)};\Theta'_{\mathrm{prior}}) \in \mathbb{R}^N$:
$$z^{(j,t)} \sim P(\cdot \mid s^{(j,t)}) = \mathcal{N}\big(\mu^{(j,t)}_{\mathrm{prior}},\, \sigma^{(j,t)}_{\mathrm{prior}}\big). \tag{11}$$

Posterior Network. The Posterior Network $\Omega_{\mathrm{post}}$, with weights $\Theta_{\mathrm{post}}$ and $\Theta'_{\mathrm{post}}$, enhances the training of the Prior Network by introducing posterior information. It maps the concatenation of the next-item representation $r^{(j,t)}$ and the click-sequence representation $s^{(j,t)}$ to a normal distribution with mean $\mu^{(j,t)}_{\mathrm{post}} = \Omega_{\mathrm{post}}(s^{(j,t)};\Theta_{\mathrm{post}}) \in \mathbb{R}^N$ and variance $\sigma^{(j,t)}_{\mathrm{post}} = \Omega'_{\mathrm{post}}(s^{(j,t)};\Theta'_{\mathrm{post}}) \in \mathbb{R}^N$:
$$z^{(j,t)} \sim Q(\cdot \mid s^{(j,t)}, r^{(j,t)}) = \mathcal{N}\big(\mu^{(j,t)}_{\mathrm{post}},\, \sigma^{(j,t)}_{\mathrm{post}}\big). \tag{12}$$

Next-item Prediction Network. The Next-item Prediction Network, with weights $\Theta_c$, predicts the embedding $\hat{r}^{(j,t)}$ of the next item to be clicked, based on the user's click sequence $s^{(j,t)}$:
$$\hat{r}^{(j,t)} = f_c\big(e^{(j,t)} = \Omega(s^{(j,t)};\Theta_b),\, z^{(j,t)};\,\Theta_c\big), \qquad \hat{y}^{(j,t)} = f_{\mathrm{rec}}\big(\Omega(x^{(j,t)};\Theta_g^b),\, \hat{r}^{(j,t)};\, g(e^{(j,t)};\Theta_p)\big). \tag{13}$$

Training Procedure. Two losses are constructed: the recommendation prediction loss $\mathcal{L}_{\mathrm{rec}}$ and the distribution difference loss $\mathcal{L}_{\mathrm{dist}}$. As with most recommendation models, $\mathcal{L}_{\mathrm{rec}}$ uses the binary cross-entropy $l(\cdot)$ to penalize the difference between $\hat{y}^{(j,t)}$ and $y^{(j,t)}$; the difference is that here the NPN uses the feature $z$ sampled from the posterior distribution $Q$ in place of $e$ in Eq. (5). In addition, $\mathcal{L}_{\mathrm{dist}}$ penalizes the difference between the posterior distribution $Q$ and the prior distribution $P$ via the Kullback-Leibler divergence, "pulling" the two distributions toward each other:
$$\mathcal{L}_{\mathrm{rec}} = \mathbb{E}_{z\sim Q(\cdot\mid s^{(j,t)},y^{(j,t)})}\big[\,l(y^{(j,t)} \mid \hat{y}^{(j,t)})\,\big], \tag{14}$$
$$\mathcal{L}_{\mathrm{dist}} = D_{\mathrm{KL}}\big(Q(z\mid s^{(j,t)},y^{(j,t)})\,\big\|\,P(z\mid s^{(j,t)})\big). \tag{15}$$
Finally, we optimize DM according to
$$\mathcal{L}(y^{(j,t)},s^{(j,t)}) = \mathcal{L}_{\mathrm{rec}} + \beta\,\mathcal{L}_{\mathrm{dist}}. \tag{16}$$
During training, the weights are randomly initialized.

Inference Procedure. During inference, the Posterior Network is removed from DM, since no posterior information is available. The uncertainty variable $u^{(j,t)}$ is computed from the multi-sampling outputs:
$$u^{(j,t)} = \mathrm{var}\big(\hat{r}_i = f_c(\Omega(s^{(j,t)};\Theta_b),\, z^{(j,t)}_{1\sim n};\,\Theta_c)\big), \tag{17}$$
where $n$ denotes the number of sampling times. Specifically, with $\hat{r}^{(j,t)}$ of dimension $N\times 1$ and $\hat{r}^{(j,t),(k)}_i$ the $k$-th entry of the vector $\hat{r}^{(j,t)}_i$, the variance is computed as
$$\mathrm{var}(\hat{r}_i) = \sum_{k=1}^{N} \mathrm{var}\big(\hat{r}^{(j,t),(k)}_{1\sim n}\big). \tag{18}$$
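As a concrete reference, here is a minimal PyTorch sketch of the DM's inference-time uncertainty (Eqs. 11, 17, 18): the prior network maps a sequence embedding to $\mathcal{N}(\mu,\sigma)$, and the uncertainty is the summed per-dimension variance of $n$ next-item predictions decoded from $n$ samples of $z$. Layer choices and names are illustrative assumptions; the posterior network and the losses of Eqs. (14)-(16) are used only during training.

    import torch
    import torch.nn as nn

    class DistributionMapper(nn.Module):
        """Sketch of the DM's prior network and multi-sample uncertainty."""

        def __init__(self, dim):
            super().__init__()
            self.mu = nn.Linear(dim, dim)        # Omega_prior: mean of the prior
            self.log_var = nn.Linear(dim, dim)   # Omega'_prior: log-variance, for stability
            self.npn = nn.Linear(2 * dim, dim)   # f_c: next-item prediction from [e; z]

        def prior(self, e_seq):                  # Eq. (11)
            return self.mu(e_seq), self.log_var(e_seq)

        def uncertainty(self, e_seq, n=10):
            """Eqs. (17)-(18): sample z n times, decode n next-item embeddings,
            and sum the per-dimension variance across the n predictions."""
            mu, log_var = self.prior(e_seq)
            std = (0.5 * log_var).exp()
            preds = torch.stack([
                self.npn(torch.cat([e_seq, mu + std * torch.randn_like(std)], dim=-1))
                for _ in range(n)
            ])                                   # (n, batch, dim)
            return preds.var(dim=0).sum(dim=-1)  # u^(j,t), one scalar per sample

    dm = DistributionMapper(dim=32)
    u = dm.uncertainty(torch.randn(4, 32))       # uncertainty for a batch of 4 sequences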
3.2.4. On-Edge Model Update

The Mis-Recommendation Score (MRS) is computed from the outputs of MRD and DM and directly determines whether the model needs to be updated:
$$\mathrm{MRS} = 1 - f_{\mathrm{MRD}}\big(s^{(j,t)}, s^{(j,t')};\Theta_{\mathrm{MRD}}\big), \tag{19}$$
$$\mathrm{Update} = \mathbb{1}\big(\mathrm{MRS} \le \mathrm{Threshold}\big), \tag{20}$$
where $\mathbb{1}(\cdot)$ is the indicator function. To set the threshold, we collect user data for a period of time, compute and sort the corresponding MRS values on the cloud, and then choose the threshold according to the load of the cloud server. For example, if the cloud load must be reduced by 90%, i.e., to only 10% of its previous value, the value at the lowest 10% position is sent to each edge as the threshold. During inference, each edge determines whether it needs to update its model, i.e., whether to request new parameters, based on Eqs. (19) and (20).
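The threshold-setting procedure above amounts to taking a percentile of recently observed MRS values on the cloud; a minimal sketch follows (function and variable names are illustrative):

    import numpy as np

    def mrs_threshold(recent_mrs, budget):
        """Pick the threshold so that roughly a `budget` fraction of edges
        request parameters (Eq. 20): budget=0.1 reduces the cloud load to 10%
        of its previous value by sending the 10th-percentile MRS value."""
        return np.percentile(recent_mrs, 100.0 * budget)

    threshold = mrs_threshold(np.random.rand(10000), budget=0.10)
    # Each edge then requests new parameters iff its MRS <= threshold (Eq. 20).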
4. Experiments

We conducted extensive experiments to evaluate the effectiveness and generalizability of the proposed IntellectReq. Part of the experimental setup, results, and analysis is in the Appendix.

4.1. Experimental Setup

Datasets. We evaluate on Amazon CDs (CDs), Amazon Electronic (Electronic), and Douban Book (Book), three widely used public benchmarks for recommendation tasks.

Evaluation Metrics. We use the widely adopted AUC and UAUC (note that a 0.1% absolute AUC gain is regarded as significant for the CTR task (Yan et al., 2022b; Lv et al., 2023b; Kang and McAuley, 2018; Zhou et al., 2018)), together with HitRate and NDCG, to evaluate model performance.

Baselines. To verify applicability, the following representative sequential modeling approaches are implemented and compared with their counterparts combined with the proposed method. DUET (Lv et al., 2023b) and APG (Yan et al., 2022b) are the SOTA of EC-CDR, generating parameters through edge-cloud collaboration for different tasks; with the cloud generator model, the on-edge model generalizes well to the current data distribution in each session without on-edge training. GRU4Rec (Hidasi et al., 2016), DIN (Zhou et al., 2018), and SASRec (Kang and McAuley, 2018) are three of the most widely used sequential recommendation methods in academia and industry, introducing GRU, attention, and self-attention, respectively, into the recommendation system. LOF (Breunig et al., 2000) and OC-SVM (Tax, 2002) estimate the density of a given point via the ratio of the local reachability of its neighbors and itself, and can be used to detect changes in the distribution of click sequences. For IntellectReq, we use SASRec as the edge model unless otherwise stated, but IntellectReq applies broadly to sequential recommendation models such as DIN, GRU4Rec, etc.

4.2. Experimental Results

4.2.1. Quantitative Results

Figure 5. Performance w.r.t. request frequency curves, based on the on-edge dynamic model from the previous time step.

Figure 6. Performance w.r.t. request frequency, based on the on-edge dynamic model from the previous time step.

Figure 7. Performance w.r.t. request frequency, based on the on-edge static model.

Figures 5, 6, and 7 summarize the quantitative results of our framework and other methods on the CDs and Electronic datasets. The experiments are based on state-of-the-art EC-CDR frameworks, namely DUET and APG. As shown in Figures 5-6, we combine the parameter generation framework with three sequential recommendation models (DIN, GRU4Rec, SASRec) and evaluate with the AUC and UAUC metrics on the CDs and Book datasets. We have the following findings:

(1) When all edge models are updated at moment t−1, the DUET framework (DUET) and the APG framework (APG) can be viewed as the performance upper bound for all methods, since DUET and APG are evaluated at a fixed 100% request frequency while the other methods are evaluated at increasing frequencies. When all edge models equal the cloud-pretrained model, IntellectReq can even beat DUET, indicating that in EC-CDR not all edges need updating at every moment; in fact, model parameters generated from user data at some moments can be detrimental to performance. Note that directly comparing the other methods with DUET and APG is not entirely fair, as DUET and APG use a fixed 100% request frequency and cannot be deployed at lower request frequencies.

(2) The random request method (DUET (Random), APG (Random)) works under any request budget. However, in most cases it does not give the optimal request scheme for a given budget (e.g., Row 1), and the correlation between its performance and the request frequency tends to be linear. The performance of random requests is unstable and unpredictable; these methods outperform others only in a few cases.

(3) LOF (DUET (LOF), APG (LOF)) and OC-SVM (DUET (OC-SVM), APG (OC-SVM)) can serve as simple baselines that yield a request scheme under one special, specific request budget. They have two weaknesses: they consume substantial resources, significantly reducing calculation speed, and they work only at one specific request budget rather than an arbitrary one; for example, in the first row the request frequency of OC-SVM can only take a single fixed value.

(4) In most cases, our IntellectReq produces the optimal request scheme under any request budget.

4.2.2. Mis-Recommendation Score and Profit

Figure 8. Mis-Recommendation Score and revenue.

To further study the effectiveness of MRD, we visualize the request timing and revenue in Figure 8 and analyze the relationship between requests and revenue. Every 100 users were randomly assigned to one of 15 groups. The figure is divided into three parts: the first assesses the request, and the second and third assess the benefit. The metric used is the Mis-Recommendation Score (MRS), which evaluates the request revenue.
MRS measures whether a recommendation would be made in error; in other words, it can be viewed as an evaluation of the model's generalization ability. The lower the score, the higher the probability of a mis-recommendation and of requesting model parameters.

- IntellectReq predicts the MRS based on the uncertainty and the click sequences at moments t and t−1.
- DUET (Random) randomly selects edges that request the cloud model to update their parameters. Its MRS can be regarded as an arbitrary constant; we use the average of IntellectReq's MRS as its value.
- DUET (w. Request) means all edge models are updated at moment t.
- DUET (w/o. Request) means no edge model is updated at moment t−1 in Figures 5 and 6, and no edge model is updated at moment 0 in Figure 7.
- Request Revenue denotes the revenue, i.e., the DUET (w. Request) curve minus the DUET (w/o. Request) curve.

From Figure 8, we make the following observations:

(1) The trends of MRS and DUET revenue typically move in opposite directions. When the MRS value is low, IntellectReq believes that the edge model cannot generalize well to the current data distribution, so it uses the most recent real-time data to request model parameters; the revenue at such moments is usually positive and relatively high. When the MRS value is high, IntellectReq keeps the model updated at the previous moment t−1 instead of t, believing that the on-edge model still generalizes well; requesting parameters at such moments usually yields low, negative revenue.

(2) Since the MRS of DUET (Random) is constant, it cannot predict the revenue of each request, and its performance curve varies randomly with the arbitrary ordering of the groups.

4.2.3. Ablation Study

Figure 9. Ablation study on model architecture.

We conducted an ablation study to show the effectiveness of the different components of IntellectReq; the results are shown in Figure 9. We use w/o. and w. to denote "without" and "with", respectively:

- IntellectReq means both DM and MRD are used.
- (w/o. DM) means MRD is used but DM is not.
- (w/o. MRD) means DM is used but MRD is not.

From the figure, we make the following observations: (1) IntellectReq generally achieves the best performance across evaluation metrics in most cases, demonstrating its effectiveness. (2) When the request frequency is small, the difference between IntellectReq and IntellectReq (w/o. DM) is not immediately apparent, as shown in Figure 9(d); the difference becomes more noticeable as the request frequency increases within a certain range. In brief, the gap first shrinks, then widens, and finally shrinks again.

4.2.4. Time and Space Cost

Most edges have limited storage space, so the on-edge model must be small yet sufficient. The edge's computing power is also limited, and on-edge recommendation requires substantial real-time processing, so any model deployed on the edge must be both simple and fast.
Therefore, we analyze whether these methods are controllable and highly profitable under the DUET framework; the additional time and space costs are shown in Table 1.

Table 1. Extra time and space cost on the CDs dataset.

Method        | Controllable | Profitable | Time Cost      | Space Cost (Param.)
LOF           | ✗            | ✓          | 225s / 11.3ms  | ≈0
OC-SVM        | ✗            | ✓          | 160s / 9.7ms   | ≈0
Random        | ✓            | ✗          | 0s / 0.8ms     | ≈0
IntellectReq  | ✓            | ✓          | 11s / 7.9ms    | ≈5.06k

In the time cost column, "/" separates the time for cloud preprocessing and edge inference. Cloud preprocessing means the cloud server first computes MRS values from recent user data, determines the threshold according to its communication budget, and sends it to the edges; edge inference refers to computing the MRS whenever the on-edge click sequence is updated. The experimental results show that: 1) In time cost, random requests are fastest for both cloud preprocessing and edge inference, followed by our IntellectReq, with LOF and OC-SVM the slowest. 2) In space cost, Random, LOF, and OC-SVM require essentially no additional space, whereas our method deploys an additional 5.06k parameters on the edge. 3) In controllability, Random and our IntellectReq can realize edge-cloud communication under an arbitrary communication budget, while LOF and OC-SVM cannot. 4) In profitability, LOF, OC-SVM, and our IntellectReq all achieve high revenue, but random requests do not. Overall, IntellectReq requires only minimal time cost (preserving real-time performance) and space cost (easy to deploy on smart edges) while being both controllable and highly profitable.

5. Conclusion

In this paper, we argue that under the EC-CDR framework most communications requesting new parameters from the cloud-based recommendation system are unnecessary, because on-edge data distributions are often stable. We introduced IntellectReq, a low-resource solution that calculates the value of each request and ensures adaptive, high-revenue edge-cloud communication. IntellectReq employs a novel edge intelligence task to identify out-of-domain data, mapping real-time user behavior to a normal distribution and using multi-sampling outputs to assess the edge model's adaptability to user actions. Extensive experiments on three public benchmarks confirm IntellectReq's efficiency and broad applicability, promoting a more effective edge-cloud collaborative recommendation approach.

Acknowledgment. This work was supported by the National Key R&D Program of China (No. 2022ZD0119100), the Scientific Research Fund of Zhejiang Provincial Education Department (Y202353679), the National Natural Science Foundation of China (No. 62376243, 62037001, U20A20387), the StarryNight Science Fund of Zhejiang University Shanghai Institute for Advanced Study (SN-ZJU-SIAS-0010), a project of Shanghai AI Laboratory (P22KS00111), and the Program of Zhejiang Province Science and Technology (2022C01044).

References

Breunig, M. M., Kriegel, H.-P., Ng, R. T., and Sander, J. 2000. LOF: Identifying density-based local outliers. In Proceedings of the 2000 ACM SIGMOD International Conference on Management of Data, 93–104.
Cai, H., Gan, C., Zhu, L., and Han, S. 2020. TinyTL: Reduce activations, not trainable parameters for efficient on-device learning. (2020).

Cao, D., Zheng, Y., Hassanzadeh, P., Lamba, S., Liu, X., and Liu, Y. 2023. Large scale financial time series forecasting with multi-faceted model. In Proceedings of the Fourth ACM International Conference on AI in Finance (ICAIF '23). ACM, New York, NY, USA, 472–480. https://doi.org/10.1145/3604237.3626868

Chang, J., Gao, C., Zheng, Y., Hui, Y., Niu, Y., Song, Y., Jin, D., and Li, Y. 2021. Sequential recommendation with graph neural networks. In Proceedings of the 44th International ACM SIGIR Conference on Research and Development in Information Retrieval, 378–387.

Chen, Z., and Wang, D. 2021. Multi-initialization meta-learning with domain adaptation. In ICASSP 2021. IEEE, 1390–1394.

Chen, Z., Xiao, T., and Kuang, K. 2022. BA-GNN: On learning bias-aware graph neural network. In 2022 IEEE 38th International Conference on Data Engineering (ICDE). IEEE, 3012–3024.

Chen, Z., Xiao, T., Kuang, K., Lv, Z., Zhang, M., Yang, J., Lu, C., Yang, H., and Wu, F. 2023. Learning to reweight for graph neural network. arXiv preprint arXiv:2312.12475.

Chen, Z., Xiao, T., Kuang, K., Lv, Z., Zhang, M., Yang, J., Lu, C., Yang, H., and Wu, F. 2024. Learning to reweight for generalizable graph neural network. In Proceedings of the AAAI Conference on Artificial Intelligence.

Chen, Z., Xu, Z., and Wang, D. 2021. Deep transfer tensor decomposition with orthogonal constraint for recommender systems. In Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 35, 4010–4018.

Ha, D., Dai, A., and Le, Q. V. 2017. HyperNetworks. (2017).

Hidasi, B., Karatzoglou, A., Baltrunas, L., and Tikk, D. 2016. Session-based recommendations with recurrent neural networks. In International Conference on Learning Representations 2016.

Huang, R., Huang, J., Yang, D., Ren, Y., Liu, L., Li, M., Ye, Z., Liu, J., Yin, X., and Zhao, Z. 2023. Make-An-Audio: Text-to-audio generation with prompt-enhanced diffusion models. arXiv preprint arXiv:2301.12661.

Huang, R., Lam, M. W. Y., Wang, J., Su, D., Yu, D., Ren, Y., and Zhao, Z. 2022a. FastDiff: A fast conditional diffusion model for high-quality speech synthesis. In IJCAI, 4157–4163.

Huang, R., Ren, Y., Liu, J., Cui, C., and Zhao, Z. 2022b. GenerSpeech: Towards style transfer for generalizable out-of-domain text-to-speech. Advances in Neural Information Processing Systems 35, 10970–10983.

Ji, W., Liang, R., Liao, L., Fei, H., and Feng, F. 2023a. Partial annotation-based video moment retrieval via iterative learning. In Proceedings of the 31st ACM International Conference on Multimedia.

Ji, W., Liu, X., Zhang, A., Wei, Y., and Wang, X. 2023b. Online distillation-enhanced multi-modal transformer for sequential recommendation. In Proceedings of the 31st ACM International Conference on Multimedia.

Kang, W.-C., and McAuley, J. 2018. Self-attentive sequential recommendation. In 2018 IEEE International Conference on Data Mining (ICDM). IEEE, 197–206.

Latifi, S., Mauro, N., and Jannach, D. 2021. Session-aware recommendation: A surprising quest for the state-of-the-art. Information Sciences 573, 291–315.

Li, H., Xiao, Y., Zheng, C., Wu, P., and Cui, P. 2023e. Propensity matters: Measuring and enhancing balancing for recommendation. In International Conference on Machine Learning. PMLR, 20182–20194.

Li, H., Xiao, Y., Zheng, C., Wu, P., Geng, Z., Chen, X., and Cui, P. 2024. Debiased collaborative filtering with kernel-based causal balancing. In International Conference on Learning Representations.

Li, J., He, X., Wei, L., Qian, L., Zhu, L., Xie, L., Zhuang, Y., Tian, Q., and Tang, S. 2022a. Fine-grained semantically aligned vision-language pre-training. Advances in Neural Information Processing Systems 35, 7290–7303.

Li, J., Pan, K., Ge, Z., Gao, M., Zhang, H., Ji, W., Zhang, W., Chua, T.-S., Tang, S., and Zhuang, Y. 2023a. Fine-tuning multimodal LLMs to follow zero-shot demonstrative instructions. arXiv preprint arXiv:2308.04152.

Li, L., Wang, C., Qin, Y., Ji, W., and Liang, R. 2023b. Biased-predicate annotation identification via unbiased visual predicate representation. In Proceedings of the 31st ACM International Conference on Multimedia (MM '23). ACM, New York, NY, USA, 4410–4420. https://doi.org/10.1145/3581783.3611847

Li, M., Wang, H., Zhang, W., Miao, J., Zhao, Z., Zhang, S., Ji, W., and Wu, F. 2023d. Winner: Weakly-supervised hierarchical decomposition and alignment for spatio-temporal video grounding. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 23090–23099.

Li, M., Wang, T., Xu, J., Han, K., Zhang, S., Zhao, Z., Miao, J., Zhang, W., Pu, S., and Wu, F. 2023c. Multi-modal action chain abductive reasoning. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 4617–4628.

Li, M., Wang, T., Zhang, H., Zhang, S., Zhao, Z., Miao, J., Zhang, W., Tan, W., Wang, J., Wang, P., et al. 2022b. End-to-end modeling via information tree for one-shot natural language spatial video grounding. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 8707–8717.

Lin, X.-Y., Xu, Y.-Y., Wang, W.-J., Zhang, Y., and Feng, F.-L. 2023. Mitigating spurious correlations for self-supervised recommendation. Machine Intelligence Research 20, 2, 263–275.

Lv, Z., Wang, F., Zhang, S., Kuang, K., Yang, H., and Wu, F. 2022. Personalizing intervened network for long-tailed sequential user behavior modeling. arXiv preprint arXiv:2208.09130.

Lv, Z., Wang, F., Zhang, S., Zhang, W., Kuang, K., and Wu, F. 2023a. Parameters efficient fine-tuning for long-tailed sequential recommendation. In CAAI International Conference on Artificial Intelligence. Springer, 442–459.

Lv, Z., Zhang, W., Zhang, S., Kuang, K., Wang, F., Wang, Y., Chen, Z., Shen, T., Yang, H., Ooi, B. C., and Wu, F. 2023b. DUET: A tuning-free device-cloud collaborative parameters generation framework for efficient device model generalization. In Proceedings of the ACM Web Conference 2023.

Marfoq, O., Neglia, G., Bellet, A., Kameni, L., and Vidal, R. 2021. Federated multi-task learning under a mixture of distributions. Advances in Neural Information Processing Systems 34, 15434–15447.

McMahan, B., Moore, E., Ramage, D., Hampson, S., and Agüera y Arcas, B. 2017. Communication-efficient learning of deep networks from decentralized data. In Artificial Intelligence and Statistics. PMLR, 1273–1282.

Mills, J., Hu, J., and Min, G. 2021. Multi-task federated learning for personalised deep neural networks in edge computing. IEEE Transactions on Parallel and Distributed Systems 33, 3, 630–641.

Qian, X., Xu, Y., Lv, F., Zhang, S., Jiang, Z., Liu, Q., Zeng, X., Chua, T.-S., and Wu, F. 2022. Intelligent request strategy design in recommender system. In KDD '22: The 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining. ACM, 3772–3782.

Qin, F.-Y., Lv, Z.-Q., Wang, D.-N., Hu, B., and Wu, C. 2020. Health status prediction for the elderly based on machine learning. Archives of Gerontology and Geriatrics 90, 104121.

Rendle, S., Freudenthaler, C., and Schmidt-Thieme, L. 2010. Factorizing personalized Markov chains for next-basket recommendation. In Proceedings of the Web Conference.

Sanh, V., Debut, L., Chaumond, J., and Wolf, T. 2019. DistilBERT, a distilled version of BERT: Smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108.

Su, J., Chen, C., Lin, Z., Li, X., Liu, W., and Zheng, X. 2023a. Personalized behavior-aware transformer for multi-behavior sequential recommendation. In Proceedings of the 31st ACM International Conference on Multimedia, 6321–6331.

Su, J., Chen, C., Liu, W., Wu, F., Zheng, X., and Lyu, H. 2023b. Enhancing hierarchy-aware graph networks with deep dual clustering for session-based recommendation. In Proceedings of the ACM Web Conference 2023, 165–176.

Sun, F., Liu, J., Wu, J., Pei, C., Lin, X., Ou, W., and Jiang, P. 2019. BERT4Rec: Sequential recommendation with bidirectional encoder representations from transformer. In Proceedings of the 28th ACM International Conference on Information and Knowledge Management, 1441–1450.

Tang, Z., Lv, Z., Zhang, S., Wu, F., and Kuang, K. 2024a. ModelGPT: Unleashing LLM's capabilities for tailored model generation. arXiv preprint arXiv:2402.12408.

Tang, Z., Lv, Z., Zhang, S., Zhou, Y., Duan, X., Kuang, K., and Wu, F. 2024b. AuG-KD: Anchor-based mixup generation for out-of-domain knowledge distillation. In 12th International Conference on Learning Representations (ICLR 2024). OpenReview.net. https://openreview.net/forum?id=fcqWJ8JgMR

Tax, D. M. J. 2002. One-class classification: Concept learning in the absence of counter-examples. (2002).

Tong, Y., Yuan, J., Zhang, M., Zhu, D., Zhang, K., Wu, F., and Kuang, K. 2023. Quantitatively measuring and contrastively exploring heterogeneity for domain generalization. In KDD. ACM, 2189–2200.

Wang, X., Cui, P., Wang, J., Pei, J., Zhu, W., and Yang, S. 2017. Community preserving network embedding. In Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 31.

Wu, S., Tang, Y., Zhu, Y., Wang, L., Xie, X., and Tan, T. 2019. Session-based recommendation with graph neural networks. In Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 33, 346–353.

Wu, Y., Lu, W., Zhang, Y., Jatowt, A., Feng, J., Sun, C., Wu, F., and Kuang, K. 2023a. Focus-aware response generation in inquiry conversation. In Findings of the Association for Computational Linguistics: ACL 2023, 12585–12599.

Wu, Y., Zhou, S., Liu, Y., Lu, W., Liu, X., Zhang, Y., Sun, C., Wu, F., and Kuang, K. 2023b. Precedent-enhanced legal judgment prediction with LLM and domain-model collaboration. arXiv preprint arXiv:2310.09241.

Lin, X., Wang, W., Zhao, J., Li, Y., Feng, F., and Chua, T.-S. 2024. Temporally and distributionally robust optimization for cold-start recommendation. In AAAI.

Yan, B., Wang, P., Zhang, K., Li, F., Xu, J., and Zheng, B. 2022b. APG: Adaptive parameter generation network for click-through rate prediction. In Advances in Neural Information Processing Systems.

Yan, Y., Niu, C., Gu, R., Wu, F., Tang, S., Hua, L., Lyu, C., and Chen, G. 2022a. On-device learning for model personalization with large-scale cloud-coordinated domain adaption. In KDD '22: The 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, 2180–2190.

Yao, J., Wang, F., Ding, X., Chen, S., Han, B., Zhou, J., and Yang, H. 2022a. Device-cloud collaborative recommendation via meta controller. In KDD '22: The 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, 4353–4362.

Yao, J., Zhang, S., Yao, Y., Wang, F., Ma, J., Zhang, J., Chu, Y., Ji, L., Jia, K., Shen, T., et al. 2022b. Edge-cloud polarization and collaboration: A comprehensive survey for AI. IEEE Transactions on Knowledge and Data Engineering.

Zhang, F., Kuang, K., Chen, L., Liu, Y., Wu, C., and Xiao, J. 2022a. Fairness-aware contrastive learning with partially annotated sensitive attributes. In The Eleventh International Conference on Learning Representations.

Zhang, F., Kuang, K., Chen, L., You, Z., Shen, T., Xiao, J., Zhang, Y., Wu, C., Wu, F., Zhuang, Y., et al. 2023b. Federated unsupervised representation learning. Frontiers of Information Technology & Electronic Engineering 24, 8, 1181–1193.

Zhang, S., Feng, F., Kuang, K., Zhang, W., Zhao, Z., Yang, H., Chua, T.-S., and Wu, F. 2023a. Personalized latent structure learning for recommendation. IEEE Transactions on Pattern Analysis and Machine Intelligence.

Zhang, S., Jiang, T., Wang, T., Kuang, K., Zhao, Z., Zhu, J., Yu, J., Yang, H., and Wu, F. 2020. DeVLBert: Learning deconfounded visio-linguistic representations. In MM '20: The 28th ACM International Conference on Multimedia. ACM, 4373–4382.

Zhang, W., Liu, C., Zeng, L., Ooi, B. C., Tang, S., and Zhuang, Y. 2023c. Learning in imperfect environment: Multi-label classification with long-tailed distribution and partial labels. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 1423–1432.

Zhang, W., and Lv, Z. 2024. Revisiting the domain shift and sample uncertainty in multi-source active domain transfer. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition.

Zhang, W., Shi, H., Guo, J., Zhang, S., Cai, Q., Li, J., Luo, S., and Zhuang, Y. 2021. MAGIC: Multimodal relAtional Graph adversarIal inferenCe for diverse and unpaired text-based image captioning. arXiv preprint arXiv:2112.06558.

Zhang, W., Zhu, L., Hallinan, J., Zhang, S., Makmur, A., Cai, Q., and Ooi, B. C. 2022b. BoostMIS: Boosting medical image semi-supervised learning with adaptive pseudo labeling and informative active annotation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 20666–20676.

Zhang, Y., Zhu, H., Song, Z., Koniusz, P., King, I., et al. 2024. Mitigating the popularity bias of graph collaborative filtering: A dimensional collapse perspective. Advances in Neural Information Processing Systems 36.

Zhou, G., Zhu, X., Song, C., Fan, Y., Zhu, H., Ma, X., Yan, Y., Jin, J., Li, H., and Gai, K. 2018. Deep interest network for click-through rate prediction. In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, 1059–1068.

Zhu, D., Li, Y., Shao, Y., Hao, J., Wu, F., Kuang, K., Xiao, J., and Wu, C. 2023a. Generalized universal domain adaptation with generative flow networks. In ACM Multimedia. ACM, 8304–8315.

Zhu, D., Li, Y., Yuan, J., Li, Z., Kuang, K., and Wu, C. 2023b. Universal domain adaptation via compressive attention matching. In Proceedings of the IEEE/CVF International Conference on Computer Vision.
6974–6985.Appendix AAppendixAAAppendix AAppendix AAppendixAAppendixThis is the Appendix for ``Intelligent Model Update Strategy for Sequential Recommendation''.A.1subsectionA.1A.1§A.1A.1Supplementary MethodA.1Supplementary MethodA.1.1subsubsectionA.1.1A.1.1§A.1.1A.1.1Notations and DefinitionsA.1.1Notations and DefinitionsWe summarize notations and definitions in the Table2.Table 2Table22Table 22Notations and DefinitionsTable 2Notations and DefinitionsNotationDefinitionuUservItemsBehavior sequencedEdge=D{d(i)}=i1NdSet of edgesSH(i), SR(i), SMRDHistory samples, Real-time samples, MRD samplesNd, NH(i) and NR(i)The number of edges, The number of history data, The number of real-time dataΘg, Θd, ΘMRDParameters of the global cloud model, Parameters of the local edge modelMg(⋅;Θg), Md(i)(⋅;Θd(i)), Mc(i)t(SMRD;ΘMRD)Global cloud model, Local edge recommendation model, Local edge control modelLrec, LMRDLoss function of recommendation, Loss function of mis-recommendationΩFeature extractorA.1.2subsubsectionA.1.2A.1.2§A.1.2A.1.2Optimization TargetA.1.2Optimization TargetTo describe it in the simplest way, we assume that the set of the edges is =D{d(i)}=i1Nd, the set updated using the baseline method is =D′u{d(i)}=i1N′u, the set updated using our method is =Du{d(i)}=i1Nu. Nd, N′u, and Nu are the amount of the D, D′u and Du, respectively. The communication upper bound is set to Nthres. Suppose the ground-truth value y, and the prediction of the baseline models ^y′, and the prediction of our model ^y are row vectors.Therefore, our optimization target is to obtain the highest performance of the model while limiting the upper bound of the communication frequency.(21)Equation2121Maximize^yyT,Maximize^yyT,Subject to0≤Nu≤Nthres,Subject to0≤Nu≤Nthres,≤NuN′u,≤NuN′u,⊂DuD.⊂DuD.In this case, the improvement of our method is =Δ-^yyT^y′yT.Or it can also be regarded as reducing the communication frequency without degrading performance.(22)Equation2222MinimizeNuMinimizeNuSubject to0≤Nu≤Nthres,Subject to0≤Nu≤Nthres,≥^yyT^y′yT,≥^yyT^y′yT,⊂DuD⊂DuDIn this case, the improvement of our method is =Δ-NNu.A.2subsectionA.2A.2§A.2A.2Supplementary Experimental ResultsA.2Supplementary Experimental ResultsA.2.1subsubsectionA.2.1A.2.1§A.2.1A.2.1Datasets.A.2.1Datasets.We evaluate IntellectReq and baselines on Amazon CDs(CDs)2footnote22footnote 2https://jmcauley.ucsd.edu/data/amazon/, Amazon Electronic(Electronic)2, Douban Book(Book)3footnote33footnote 3https://www.kaggle.com/datasets/fengzhujoey/douban-datasetratingreviewside-information, three widely used public benchmarks in the recommendation tasks, Table3 shows the statistics. Following conventional practice, all user-item pairs in the dataset are treated as positive samples. To conduct sequential recommendation experiments, we arrange the items clicked by the user into a sequence in the order of timestamps.We also refer to (Zhou etal., 2018; Kang and McAuley, 2018; Hidasi etal., 2016), which is negatively sampled at :14 and :199 in the training set and testing set, respectively. 
Negative sampling treats all user-item pairs that do not appear in the dataset as negative samples.

Table 3: Statistics of Datasets.

                Amazon CDs    Amazon Electronic    Douban Books
  #User         1,578,597     4,201,696            46,549
  #Item         486,360       476,002              212,996
  #Interaction  3,749,004     7,824,482            1,861,533
  #Density      0.0000049     0.0000039            0.0002746

A.2.2 Evaluation Metrics

In the experiments, we use the widely adopted AUC, UAUC, HitRate, and NDCG as the metrics to evaluate model performance. They are defined by the following equations:

(23) AUC = \frac{\sum_{x_0 \in D^T} \sum_{x_1 \in D^F} \mathbb{1}[f(x_1) < f(x_0)]}{|D^T|\,|D^F|},

(24) UAUC = \frac{1}{|U|} \sum_{u \in U} \frac{\sum_{x_0 \in D_u^T} \sum_{x_1 \in D_u^F} \mathbb{1}[f(x_1) < f(x_0)]}{|D_u^T|\,|D_u^F|},

(25) NDCG@K = \frac{1}{|U|} \sum_{u \in U} \frac{2^{\mathbb{1}(R_{u,g_u} \le K)} - 1}{\log_2(R_{u,g_u} + 1)},

(26) HitRate@K = \frac{1}{|U|} \sum_{u \in U} \mathbb{1}(R_{u,g_u} \le K),

where \mathbb{1}(\cdot) is the indicator function, f is the model to be evaluated, and R_{u,g_u} is the rank the model predicts for the ground-truth item g_u of user u. D^T and D^F are the positive and negative testing sample sets, and D_u^T and D_u^F are the positive and negative testing sample sets of user u, respectively.

A.2.3 Request Frequency and Threshold

Figure 10 shows the relationship between the request frequency and different thresholds. (Figure 10: Request frequency w.r.t. different thresholds.)

A.3 Training Procedure and Inference Procedure

In this section, we describe the overall pipeline in detail in conjunction with Figure 11. (Figure 11: The overall pipeline of our proposed IntellectReq.)

1. Training Procedure

① We first pre-train an EC-CDR framework, which can generate model parameters from data.

② MRD training procedure. 1) Construct the MRD dataset. Suppose the current moment is T. We take the model parameters generated under the EC-CDR framework from the data at moment t=0 and apply that model to the data at the current moment t=T. We then obtain a prediction ŷ, and comparing ŷ with the ground truth y tells us whether the model mis-recommends. Repeating this with the data used for parameter generation at each moment from t=0 to t=T−1 constructs the MRD dataset. It contains three columns: the data used for parameter generation (x1), the current data (x2), and the mis-recommendation label (y_MRD). 2) Train MRD. MRD is a fully connected neural network that takes x1 and x2 as input and fits the mis-recommendation label y_MRD. The trained MRD can then determine whether model parameters generated from data at an earlier moment are still valid for the current data; its output can be read as a Mis-Recommendation Score (MRS).

③ DM training procedure. We map the data into a Gaussian distribution with a Conditional-VAE, and then sample feature vectors from the distribution to complete the next-item prediction task, i.e., to predict the item the user will click next. This yields the DM.
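To make the metric definitions concrete, here is a minimal NumPy sketch (our illustration, not released code) of HitRate@K and NDCG@K for the single-ground-truth setting above; the function names are ours, and `ranks` is assumed to hold the 1-based rank R_{u,g_u} per user.

```python
import numpy as np

def hit_rate_at_k(ranks, k):
    """HitRate@K: fraction of users whose ground-truth item ranks within top-K."""
    ranks = np.asarray(ranks)
    return float(np.mean(ranks <= k))

def ndcg_at_k(ranks, k):
    """NDCG@K with one relevant item per user: gain 1/log2(rank+1) inside top-K, else 0."""
    ranks = np.asarray(ranks, dtype=np.float64)
    hits = ranks <= k
    gains = np.zeros_like(ranks)
    gains[hits] = 1.0 / np.log2(ranks[hits] + 1.0)
    return float(np.mean(gains))

# Toy usage: 4 users whose ground-truth items were ranked 1, 3, 12, and 5.
ranks = [1, 3, 12, 5]
print(hit_rate_at_k(ranks, 10))  # 0.75
print(ndcg_at_k(ranks, 10))      # mean of [1.0, 0.5, 0.0, 0.387] ~= 0.472
```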
DM can produce multiple next-item predictions by sampling from the distribution multiple times, and these multi-sampling outputs are used to calculate the Uncertainty.

④ Joint training procedure of MRD and DM. We use a fully connected neural network, denoted f(·), that takes the MRS and the Uncertainty as input and fits y_MRD in the MRD dataset, i.e., the mis-recommendation label.

2. Inference Procedure

The MRS is calculated on the cloud using all recent user data, and the threshold on the MRS is determined according to the load. The cloud then sends this threshold to each edge. Suppose an edge last updated its model at some moment t=n with n<T; at the current moment t=T it must decide whether the model has become invalid for the current data distribution and therefore needs another update. We only need to feed the MRS and the Uncertainty computed from the data at moments t=n and t=T into f(·) to decide. In fact, the output is an invalidity degree, a continuous value between 0 and 1, and whether to update the edge model depends on the threshold calculated on the cloud based on the load.

A.4 Hyperparameters and Training Schedules

We summarize the hyperparameters and training schedules of IntellectReq on the three datasets in Table 4.

Table 4: Hyperparameters and training schedules (shared across Amazon CDs, Amazon Electronic, and Douban Book).

  GPU               Tesla A100
  Optimizer         Adam
  Learning rate     0.001
  Batch size        1024
  Sequence length   30
  Dimension of z    1×64
  N                 32
  n                 10

A.4.1 Impact on the Real World

The following case is based on a dynamic model from the previous moment; if it were based on an on-edge static model, the improvement would be much more significant. We present some intuitive figures to show the scale of the challenge and IntellectReq's impact on the real world:

Table 5: IntellectReq's Impact on the Real World.

                 Google                  Alibaba
                 Bytes      FLOPs        Bytes      FLOPs
  EC-CDR         4.69GB     152.46G      53.19GB    1.68T
  IntellectReq   3.79GB     123.49G      43.08GB    1.36T
  Δ (average)                   19.2%

(1) We calculate the number of bytes and FLOPs required to update one model's parameters: 48.5kB and 1.53M FLOPs. That is, updating a model on the edge requires transmitting 48.5kB through edge-cloud communication and consumes 1.53M FLOPs on the cloud. (2) According to public reports, Google processes 99,000 clicks per second, so it would need to transmit 48.5kB × 99k ≈ 4.69GB per second and consume 1.53M × 99k ≈ 152.46G FLOPs on its cloud servers. Alibaba processes 1,150,000 clicks per second, corresponding to 48.5kB × 1150k ≈ 53.19GB per second and 1.53M × 1150k ≈ 1.68T FLOPs. These are not even peak values. Such enormous bandwidth and computing consumption makes it hard to update every edge model at every moment, especially at peak times. (3) Today's distributed clouds may be able to afford the computational volume, since enough servers can be called on to support edge-cloud collaboration; however, the huge resource consumption is impractical in real scenarios. Moreover, according to our empirical study, IntellectReq brings a 21.4% resource saving at equal performance under the APG framework, and a 16.6% resource saving at equal performance under the DUET framework.
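To make the cloud-side threshold selection and the edge-side decision rule concrete, here is a minimal NumPy sketch (our illustration, not the paper's code); `cloud_threshold` and `should_update` are hypothetical names, and MRS values are assumed to lie in [0, 1].

```python
import numpy as np

def cloud_threshold(recent_mrs, budget_fraction):
    """Cloud side: if only `budget_fraction` of edges may request per step,
    send the corresponding quantile of recently observed MRS values, so that
    Update = 1(MRS <= threshold) fires for roughly that fraction of traffic."""
    return float(np.quantile(recent_mrs, budget_fraction))

def should_update(mrs, threshold):
    """Edge side: request new parameters only when the MRS falls at or
    below the cloud-issued threshold (the rule of Eqs. 19-20)."""
    return mrs <= threshold

# Toy usage with a hypothetical 10% communication budget.
recent_mrs = np.random.rand(10_000)        # MRS of recent traffic, scored on the cloud
thr = cloud_threshold(recent_mrs, 0.10)    # ~0.10 for uniform scores
print(should_update(0.03, thr), should_update(0.80, thr))  # True False
```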
Summing up, IntellectReq saves about 19% of resources on average, which is very helpful for cost control and can facilitate the deployment of EC-CDR in practice. Table 5 compares our method IntellectReq with EC-CDR in the amount of transmitted data and the computing power consumed on the cloud. (4) During peak periods, resources become tight and cause freezes or even crashes, and this is already the case before EC-CDR is deployed, i.e., when edge-cloud communication only performs the most basic user data transmission. IntellectReq can thus achieve better performance than EC-CDR under any resource limit ϵ, or achieve, with the same resources, the performance for which EC-CDR would require an extra ϵ+19% of resources.

\text{EC-CDR}: \underbrace{M_g(\{S_H^{(i)}\}_{i=1}^{N_d};\Theta_g)}_{\text{Global Cloud Model}} \underset{\text{Data}}{\overset{\text{Parameters}}{\rightleftarrows}} \underbrace{M_d^{(i)}(S_R^{(i)};\Theta_d^{(i)})}_{\text{Local Edge Model}}.

To determine whether to request parameters from the cloud, IntellectReq uses S_MRD to learn a Mis-Recommendation Detector, which decides whether to update the edge model through the EC-CDR framework. S_MRD is a dataset constructed from S_H without any additional annotations for training IntellectReq, and Θ_MRD denotes the learned parameters of the local MRD model:

(2) \text{IntellectReq}: \underbrace{M_c^{(i,t)}(S_{MRD};\Theta_{MRD})}_{\text{Local Edge Control Model}} \xrightarrow{\text{Control}} \underbrace{\big(M_g \underset{\text{Data}}{\overset{\text{Parameters}}{\rightleftarrows}} M_d^{(i)}\big)}_{\text{EC-CDR}}.

3.2 IntellectReq

Figure 3 gives an overview of the recommendation model, EC-CDR, and the IntellectReq framework, which consists of a Mis-Recommendation Detector (MRD) and a Distribution Mapper (DM) to achieve high revenue under any request budget. We first introduce EC-CDR, and then present IntellectReq, which we propose to overcome the frequent, low-revenue requests of EC-CDR. IntellectReq achieves high communication revenue under any edge-cloud communication budget in EC-CDR. MRD determines whether to request parameters from the cloud model M_g or to keep using the edge recommendation model M_d, based on the real-time data S_R^{(i)}. DM helps MRD make further judgments by quantifying the uncertainty in the recommendation model's understanding of the data semantics.

3.2.1 The Framework of EC-CDR

In EC-CDR, a recommendation model with static layers and dynamic layers is trained for the global cloud model. The goal of EC-CDR can thus be formulated as the following optimization problem:

(3) \hat{y}_{H^{(i)}}^{(j)} = f_{rec}\big(\Omega(x_{H^{(i)}}^{(j)};\Theta_{gb});\Theta_{gc}\big), \qquad L_{rec} = \sum_{i=1}^{N_d} \sum_{j=1}^{N_H^{(i)}} D_{ce}\big(y_{H^{(i)}}^{(j)}, \hat{y}_{H^{(i)}}^{(j)}\big),

where D_ce(·,·) denotes the cross-entropy between two probability distributions, f_rec(·) denotes the dynamic layers of the recommendation model, and Ω(x;Θ_gb) is the static layers extracting features from x. EC-CDR thus decouples the edge model into "static layers" and "dynamic layers" to achieve better personalization. The primary factor enhancing the on-edge model's generalization to real-time data under EC-CDR is the dynamic layers. Upon completion of training, the static layers' parameters Θ_gb, determined by Eq. 3, remain fixed.
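As a rough illustration of the static/dynamic split in Eq. 3, here is a minimal PyTorch sketch (ours, not the paper's implementation): the encoder Ω is an ordinary trained module, while the prediction head's weights are supplied from outside rather than stored on the edge. The class name, the GRU encoder, and the single dynamic layer are our assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ECCDREdgeModel(nn.Module):
    """Edge model split into static layers (Omega) and a dynamic head (f_rec)
    whose weights arrive from the cloud generator instead of being parameters."""

    def __init__(self, num_items, dim=64):
        super().__init__()
        self.emb = nn.Embedding(num_items, dim)             # static
        self.encoder = nn.GRU(dim, dim, batch_first=True)   # static: Omega

    def forward(self, clicks, dyn_weight, dyn_bias):
        # clicks: (B, L) item ids; dyn_weight: (num_classes, dim), dyn_bias: (num_classes,)
        h, _ = self.encoder(self.emb(clicks))
        feat = h[:, -1]                                     # Omega(x; Theta_gb)
        return F.linear(feat, dyn_weight, dyn_bias)         # f_rec(.; generated Theta_gc)
```

In this sketch one generated weight set serves the whole batch, which matches per-session parameter generation; a per-sample variant would apply each generated matrix with a batched matrix multiply instead.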
Conversely, the dynamic layers' parameters, Θ_gc, are dynamically generated from real-time data by the cloud generator. In edge inference, the cloud-based parameter generator uses the real-time click sequence s_{R^{(i)}}^{(j,t)} ∈ S_R^{(i)} to generate the parameters,

(4) h_{R^{(i)}}^{(n)} = L_{layer}^{(n)}\big(e_{R^{(i)}}^{(j,t)}\big), \quad e_{R^{(i)}}^{(j,t)} = E_{shared}\big(s_{R^{(i)}}^{(j,t)}\big), \quad \forall n = 1,\cdots,N_l,

where E_shared(·) is the shared encoder and L_layer^{(n)}(·) is a linear layer that adapts e_{R^{(i)}}^{(j,t)}, the output of E_shared(·), to the features of the n-th dynamic layer. e_{R^{(i)}}^{(j,t)} is the embedding vector generated from the click sequence at moment t. The cloud generator treats the parameters of a fully-connected layer as a matrix K^{(n)} ∈ R^{N_in×N_out}, where N_in and N_out are the numbers of input and output neurons of the n-th fully-connected layer, respectively. The cloud generator g(·) then converts the real-time click sequence s_{R^{(i)}}^{(j,t)} into the dynamic layers' parameters Θ̂_gc via K_{R^{(i)}}^{(n)} = g^{(n)}(e_{R^{(i)}}^{(n)}). Since the superscript (n) is no longer needed below, we abbreviate g(·) = L_layer(E_shared(·)). The edge recommendation model then updates its parameters and makes inferences as follows:

(5) \hat{y}_{R^{(i)}}^{(j,t)} = f_{rec}\big(\Omega(x_{R^{(i)}}^{(j,t)};\Theta_{gb});\ \hat{\Theta}_{gc} = g(s_{R^{(i)}}^{(j,t)};\Theta_p)\big).

Figure 4: Overview of the proposed Distribution Mapper. Training procedure: the architecture includes the Recommendation Network, Prior Network, Posterior Network, and Next-item Prediction Network; the loss consists of a classification loss and a KL-divergence loss. Inference procedure: the architecture includes the Recommendation Network, Prior Network, and Next-item Prediction Network; the uncertainty is calculated from the multi-sampling output.

In cloud training, all layers of the cloud generator model are optimized together with the static layers of the primary model, conditioned on the global history data S_H^{(i)} = {x_{H^{(i)}}^{(j)}, y_{H^{(i)}}^{(j)}}_{j=1}^{N_H^{(i)}}, instead of first optimizing the static layers of the primary model and then the cloud generator. The cloud generator's loss function is defined analogously to Eq. 3, with the generated parameters in place of Θ_gc:

(6) L = \sum_{i=1}^{N_d} \sum_{j=1}^{N_H^{(i)}} D_{ce}\Big(y_{H^{(i)}}^{(j)},\ f_{rec}\big(\Omega(x_{H^{(i)}}^{(j)};\Theta_{gb});\ g(s_{H^{(i)}}^{(j)};\Theta_p)\big)\Big).

EC-CDR can improve the generalization ability of the edge recommendation model. However, it cannot easily be deployed in a real-world environment because of its high request frequency and low communication revenue.
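A matching sketch of the generator g(·) of Eqs. 4-5: a shared sequence encoder E_shared followed by one linear head per dynamic layer, each emitting a flattened N_in × N_out weight matrix K^(n). This is our minimal illustration under assumed layer shapes, not the DUET or APG implementation.

```python
import torch
import torch.nn as nn

class CloudGenerator(nn.Module):
    """Sketch of g(.): shared encoder E_shared plus one linear head L_layer^(n)
    per dynamic layer, emitting that layer's weight matrix K^(n)."""

    def __init__(self, num_items, dim, layer_shapes):
        super().__init__()
        self.emb = nn.Embedding(num_items, dim)
        self.shared = nn.GRU(dim, dim, batch_first=True)    # E_shared
        self.heads = nn.ModuleList(
            [nn.Linear(dim, n_in * n_out) for (n_in, n_out) in layer_shapes]
        )                                                    # L_layer^(n)
        self.layer_shapes = layer_shapes

    def forward(self, clicks):
        h, _ = self.shared(self.emb(clicks))
        e = h[:, -1]                                         # e^(j,t)
        return [
            head(e).view(-1, n_in, n_out)                    # per-sample K^(n)
            for head, (n_in, n_out) in zip(self.heads, self.layer_shapes)
        ]

# Usage sketch: generate the head of the edge model above for one session.
gen = CloudGenerator(num_items=1000, dim=64, layer_shapes=[(64, 2)])
(K,) = gen(torch.randint(0, 1000, (1, 30)))   # K: (1, 64, 2), i.e. Theta_gc-hat
```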
Under the EC-CDR framework, the moment t in Eq. 5 is always equal to the current moment T, which means that the edge and the cloud communicate at every moment. In fact, however, much of this communication is unnecessary, because the Θ̂_gc generated from an earlier sequence may already work well enough. To alleviate this issue, we propose MRD and DM to decide when the edge recommendation model should update its parameters.

3.2.2 Mis-Recommendation Detector

The training procedure of MRD can be divided into two stages. The goal of the first stage is to construct an MRD dataset based on the user's historical data, without any additional annotation, to train the MRD. The cloud model M_g and the edge model M_d are trained in the same way as in EC-CDR:

(7) \hat{y}_{R^{(i)}}^{(j,t,t')} = f_{rec}\big(\Omega(x_{R^{(i)}}^{(j,t)};\Theta_{gb});\ g(s_{R^{(i)}}^{(j,t')};\Theta_p)\big).

Here, we set t' ≤ t = T. That is, when generating model parameters, we use the click sequence s^{(j,t')} from the previous moment t', but this model is used to predict the current data. We can then obtain c^{(j,t,t')}, which indicates whether the sample is correctly predicted, by comparing the prediction ŷ_{R^{(i)}}^{(j,t,t')} with the ground truth y_{R^{(i)}}^{(j,t)}:

(8) c^{(j,t,t')} = \begin{cases} 1, & \hat{y}_{R^{(i)}}^{(j,t,t')} = y_{R^{(i)}}^{(j,t)}; \\ 0, & \hat{y}_{R^{(i)}}^{(j,t,t')} \ne y_{R^{(i)}}^{(j,t)}, \end{cases}

(9) L_{MRD} = \sum_{j=1}^{|S_{MRD}^{(i)}|} \sum_{t'=1}^{T} l\big(y_j,\ \hat{y} = f_{MRD}(s^{(j,t)}, s^{(j,t')};\Theta_{MRD})\big).

We then construct the new mis-recommendation training dataset as S_{MRD}^{(i)} = {s^{(j,t)}, s^{(j,t')}, c^{(j,t,t')}}, 0 ≤ t' ≤ t = T. A dynamic-layers network f_MRD(·) can then be trained on S_{MRD}^{(i)} according to Eq. 9, where t = T and the loss function l(·) is the cross-entropy.

3.2.3 Distribution Mapper

Although the MRD can determine when to update edge parameters, simply mapping a click sequence to a single point in a high-dimensional space is insufficient, owing to the ubiquitous noise in click sequences. We therefore design the DM, shown in Figure 4, to directly perceive data distribution shift and quantify the uncertainty in the recommendation model's understanding of the semantics of the data. Inspired by the Conditional-VAE, we map click sequences to normal distributions. Different from MRD, the DM module introduces a variable u^{(j,t)} that denotes this uncertainty into Eq. 9:

(10) L_{MRD} = \sum_{j=1}^{|S_{MRD}^{(i)}|} \sum_{t'=1}^{T} l\big(y_j,\ \hat{y} = f_{MRD}(s^{(j,t)}, s^{(j,t')}, u^{(j,t)};\Theta_{MRD})\big).

The uncertainty variable u^{(j,t)} reflects the recommendation model's understanding of the semantics of the data, and DM focuses on how to learn it. The Distribution Mapper consists of three components, as shown in the corresponding figure in the Appendix: the Prior Network P(·) (PRN), the Posterior Network Q(·) (PON), and the Next-item Prediction Network f(·) (NPN), which includes the static layers Ω(·) and the dynamic layers f_NPN(·). Note that Ω(·) here is the same as Ω(·) in Sections 3.2.1 and 3.2.2, so there is almost no additional resource consumption. We first introduce the three components separately, and then describe the training and inference procedures.

Prior Network. The Prior Network, with weights Θ_prior and Θ'_prior, maps the representation of a click sequence s^{(j,t)} to a prior probability distribution.
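The following sketch illustrates the MRD construction of Eqs. 7-9 under our assumptions (it is ours, not the paper's code): labels follow Eq. 8, and f_MRD is a small MLP over the encodings of the two click sequences.

```python
import torch
import torch.nn as nn

class MisRecommendationDetector(nn.Module):
    """Sketch of f_MRD: an MLP scoring whether parameters generated from the
    old sequence s_t' still recommend correctly on the current sequence s_t."""

    def __init__(self, dim):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * dim, dim), nn.ReLU(), nn.Linear(dim, 1)
        )

    def forward(self, enc_t, enc_tp):
        # enc_t, enc_tp: (B, dim) encodings of s^(j,t) and s^(j,t')
        return self.net(torch.cat([enc_t, enc_tp], dim=-1)).squeeze(-1)

def mrd_labels(preds_old, targets):
    """Eq. 8: c = 1 if the stale-parameter prediction matches the ground truth."""
    return (preds_old == targets).float()

# Training sketch (Eq. 9): binary cross-entropy between f_MRD's score and c.
# loss = nn.BCEWithLogitsLoss()(detector(enc_t, enc_tp), mrd_labels(p_old, y))
```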
We set this prior probability distribution to be a normal distribution with mean μ_prior^{(j,t)} = Ω_prior(s^{(j,t)};Θ_prior) ∈ R^N and variance σ_prior^{(j,t)} = Ω'_prior(s^{(j,t)};Θ'_prior) ∈ R^N:

(11) z^{(j,t)} \sim P(\cdot\,|\,s^{(j,t)}) = \mathcal{N}\big(\mu_{prior}^{(j,t)}, \sigma_{prior}^{(j,t)}\big).

Posterior Network. The Posterior Network Ω_post, with weights Θ_post and Θ'_post, enhances the training of the Prior Network by introducing posterior information. It maps the concatenation of the next-item representation r^{(j,t)} and the click-sequence representation s^{(j,t)} to a normal distribution. We define this posterior probability distribution as a normal distribution with mean μ_post^{(j,t)} = Ω_post(s^{(j,t)}, r^{(j,t)};Θ_post) ∈ R^N and variance σ_post^{(j,t)} = Ω'_post(s^{(j,t)}, r^{(j,t)};Θ'_post) ∈ R^N:

(12) z^{(j,t)} \sim Q(\cdot\,|\,s^{(j,t)}, r^{(j,t)}) = \mathcal{N}\big(\mu_{post}^{(j,t)}, \sigma_{post}^{(j,t)}\big).

Next-item Prediction Network. The Next-item Prediction Network, with weights Θ_c, predicts the embedding r̂^{(j,t)} of the next item to be clicked based on the user's click sequence s^{(j,t)}:

(13) \hat{r}^{(j,t)} = f_c\big(e^{(j,t)} = \Omega(s^{(j,t)};\Theta_b),\ z^{(j,t)};\Theta_c\big), \qquad \hat{y}^{(j,t)} = f_{rec}\big(\Omega(x^{(j,t)};\Theta_{gb}),\ \hat{r}^{(j,t)};\ g(e^{(j,t)};\Theta_p)\big).

Training Procedure. In the training procedure, two losses are constructed: a recommendation prediction loss L_rec and a distribution difference loss L_dist. As with most recommendation models, L_rec uses the binary cross-entropy loss l(·) to penalize the difference between ŷ^{(j,t)} and y^{(j,t)}. The difference is that here NPN uses the feature z, sampled from the posterior distribution Q, to replace e in Eq. 5. In addition, L_dist penalizes the difference between the posterior distribution Q and the prior distribution P with the Kullback-Leibler divergence; L_dist "pulls" the posterior and prior distributions towards each other. The formulas for L_rec and L_dist are as follows:

(14) L_{rec} = \mathbb{E}_{z \sim Q(\cdot|s^{(j,t)}, y^{(j,t)})}\big[l(y^{(j,t)}, \hat{y}^{(j,t)})\big],

(15) L_{dist} = D_{KL}\big(Q(z\,|\,s^{(j,t)}, y^{(j,t)})\ \|\ P(z\,|\,s^{(j,t)})\big).

Finally, we optimize DM according to

(16) L(y^{(j,t)}, s^{(j,t)}) = L_{rec} + \beta \cdot L_{dist}.

During training, the weights are randomly initialized.

Inference Procedure. In the inference procedure, the Posterior Network is removed from DM, because no posterior information is available at inference time. The uncertainty variable u^{(j,t)} is calculated from the multi-sampling outputs as follows:

(17) u^{(j,t)} = \mathrm{var}\big(\hat{r}_i = f_c(\Omega(s^{(j,t)};\Theta_b),\ z_i^{(j,t)};\Theta_c)\big),\ i = 1,\cdots,n,

where n denotes the number of sampling times.
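A compact PyTorch sketch of the DM of Eqs. 11-16 in the Conditional-VAE style described above (our illustration; the single-linear-layer parameterization of the prior and posterior heads is an assumption). The forward pass samples z from the posterior via reparameterization and returns the next-item prediction together with the closed-form Gaussian KL term of Eq. 15.

```python
import torch
import torch.nn as nn

class DistributionMapper(nn.Module):
    """Sketch of DM: prior net P(z|s), posterior net Q(z|s,r), and a
    next-item head f_c; trained with prediction loss + beta * KL(Q || P)."""

    def __init__(self, dim, zdim):
        super().__init__()
        self.prior = nn.Linear(dim, 2 * zdim)       # -> (mu, log sigma^2)
        self.post = nn.Linear(2 * dim, 2 * zdim)    # takes [s ; r]
        self.fc = nn.Linear(dim + zdim, dim)        # next-item prediction head

    @staticmethod
    def sample(mu, logvar):
        # Reparameterization: z = mu + eps * sigma.
        return mu + torch.randn_like(mu) * (0.5 * logvar).exp()

    def forward(self, s, r):
        mu_p, logv_p = self.prior(s).chunk(2, -1)
        mu_q, logv_q = self.post(torch.cat([s, r], -1)).chunk(2, -1)
        z = self.sample(mu_q, logv_q)               # posterior sample (training)
        r_hat = self.fc(torch.cat([s, z], -1))
        # KL(Q || P) for two diagonal Gaussians, per dimension.
        kl = 0.5 * (logv_p - logv_q
                    + (logv_q.exp() + (mu_q - mu_p) ** 2) / logv_p.exp() - 1)
        return r_hat, kl.sum(-1).mean()

# Training sketch (Eq. 16): total = prediction_loss(r_hat, ...) + beta * kl.
```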
Specifically, we take the dimension of r̂^{(j,t)} to be N×1, write r̂_i^{(j,t),(k)} for the k-th entry of the vector r̂_i^{(j,t)}, and calculate the variance as follows:

(18) \mathrm{var}(\hat{r}_i) = \sum_{k=1}^{N} \mathrm{var}\big(\hat{r}_{1\cdots n}^{(j,t),(k)}\big).

3.2.4 On-edge Model Update

The Mis-Recommendation Score (MRS) is a variable calculated from the outputs of MRD and DM, and it directly determines whether the model needs to be updated:

(19) MRS = 1 - f_{MRD}\big(s^{(j,t)}, s^{(j,t')};\Theta_{MRD}\big),

(20) \mathrm{Update} = \mathbb{1}\big(MRS \le \mathrm{Threshold}\big),

where \mathbb{1}(·) is the indicator function. To obtain the threshold, we collect user data for a period of time, compute the corresponding MRS values on the cloud, sort them, and set the threshold according to the load of the cloud server. For example, if the load of the cloud server needs to be reduced by 90%, i.e., to 10% of its previous value, only the value at the lowest 10% position of the sorted MRS values needs to be sent to each edge as the threshold. During inference, each edge determines from Eqs. 19 and 20 whether it needs to update its model, that is, whether to request new parameters.

4 Experiments

We conducted extensive experiments to evaluate the effectiveness and generalizability of the proposed IntellectReq. Part of the experimental setup, results, and analysis is given in the Appendix.

4.1 Experimental Setup

Datasets. We evaluate on Amazon CDs (CDs), Amazon Electronic (Electronic), and Douban Book (Book), three widely used public benchmarks for recommendation tasks.

Evaluation Metrics. In the experiments, we use the widely adopted AUC¹, UAUC¹, HitRate, and NDCG as the metrics to evaluate model performance. (¹Note that a 0.1% absolute AUC gain is regarded as significant for the CTR task (Yan et al., 2022b; Lv et al., 2023b; Kang and McAuley, 2018; Zhou et al., 2018).)

Baselines. To verify the applicability, the following representative sequential modeling approaches are implemented and compared with their counterparts combined with the proposed method. DUET (Lv et al., 2023b) and APG (Yan et al., 2022b) are the state of the art of EC-CDR, generating parameters through edge-cloud collaboration for different tasks; with the cloud generator model, the on-edge model can generalize to the current data distribution in each session without training on the edge. GRU4Rec (Hidasi et al., 2016), DIN (Zhou et al., 2018), and SASRec (Kang and McAuley, 2018) are three of the most widely used sequential recommendation methods in academia and industry, introducing GRU, attention, and self-attention into recommendation systems, respectively. LOF (Breunig et al., 2000) estimates the density of a given point via the ratio of the local reachability density of its neighbors to its own, and OC-SVM (Tax, 2002) learns a boundary around the training distribution; both can be used to detect changes in the distribution of click sequences. For IntellectReq, we use SASRec as the edge model unless otherwise stated, but note that IntellectReq applies broadly to sequential recommendation models such as DIN, GRU4Rec, etc.

4.2 Experimental Results

4.2.1 Quantitative Results
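Building on the DistributionMapper sketch above, the uncertainty of Eqs. 17-18 can be estimated by sampling z from the prior n times and summing the per-dimension variances of the decoded next-item vectors; this is again our sketch, with n = 10 as in Table 4.

```python
import torch

@torch.no_grad()
def uncertainty(dm, s, n=10):
    """Eqs. 17-18 sketch: sample z ~ P(z|s) n times, decode r_hat each time,
    and sum the per-dimension variances across the n samples."""
    mu, logvar = dm.prior(s).chunk(2, -1)
    r_hats = torch.stack(
        [dm.fc(torch.cat([s, dm.sample(mu, logvar)], -1)) for _ in range(n)]
    )                                    # (n, B, N)
    return r_hats.var(dim=0).sum(-1)     # u^(j,t): one scalar per sequence
```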
Figure 5: Performance w.r.t. Request Frequency curves, based on the on-edge dynamic model with a one-step (t−1) time difference.
Figure 6: Performance w.r.t. Request Frequency, based on the on-edge dynamic model with a one-step (t−1) time difference.
Figure 7: Performance w.r.t. Request Frequency, based on the on-edge static model.

Figures 5, 6, and 7 summarize the quantitative results of our framework and the other methods on the CDs and Electronic datasets. The experiments are based on state-of-the-art EC-CDR frameworks, namely DUET and APG. As shown in Figures 5-6, we combine the parameter generation framework with three sequential recommendation models (DIN, GRU4Rec, SASRec) and evaluate these methods with the AUC and UAUC metrics on the CDs and Book datasets. We have the following findings:

(1) If all edge models are updated at moment t−1, the DUET framework (DUET) and the APG framework (APG) can be viewed as the performance upper bound for all methods, since DUET and APG are evaluated at a fixed 100% request frequency while the other methods are evaluated at increasing frequencies. If all edge models instead equal the cloud-pretrained model, IntellectReq can even beat DUET, which indicates that in EC-CDR not every edge needs an update at every moment; in fact, model parameters generated from user data at some moments can be detrimental to performance. Note that directly comparing the other methods with DUET and APG is not entirely fair, as DUET and APG use a fixed 100% request frequency and cannot be deployed at lower request frequencies.

(2) The random request methods (DUET (Random), APG (Random)) work under any request budget. However, in most cases they do not yield the optimal request scheme for a given budget (e.g., the first row), and the correlation between their performance and the request frequency tends to be linear. Their performance is unstable and unpredictable; they outperform the other methods only in a few cases.

(3) LOF (DUET (LOF), APG (LOF)) and OC-SVM (DUET (OC-SVM), APG (OC-SVM)) can serve as simple baselines that produce a good request scheme, but only under one special, specific request budget. They have two weaknesses. First, they consume considerable resources and thus significantly slow down computation. Second, they only work at a specific request budget rather than an arbitrary one; for example, in the first row, the request frequency of OC-SVM can only take one fixed value, determined by the detector itself rather than by the budget.

(4) In most cases, our IntellectReq yields the optimal request scheme under any request budget.

4.2.2 Mis-Recommendation Score and Profit

Figure 8: Mis-Recommendation Score and Revenue.

To further study the effectiveness of MRD, we visualize the request timing and revenue in Figure 8, analyzing the relationship between requests and revenue. Users were randomly assigned to 15 groups of 100 users each. The figure is divided into three parts: the first part assesses the request, and the second and third parts assess the benefit. The metric used here is the Mis-Recommendation Score (MRS), which evaluates the request revenue.
MRS measures whether a recommendation will be made in error; in other words, it can be viewed as an evaluation of the model's generalization ability. The lower the score, the higher the probability of a mis-recommendation and thus of requesting model parameters.

• IntellectReq predicts the MRS based on the uncertainty and the click sequences at moments t and t−1.
• DUET (Random) randomly selects edges that request the cloud model to update their parameters. Its MRS can be regarded as an arbitrary constant; we use the average of IntellectReq's MRS values.
• DUET (w. Request) means all edge models are updated at moment t.
• DUET (w/o. Request) means no edge model is updated: at moment t−1 in Figures 5 and 6, and at moment 0 in Figure 7.
• Request Revenue is the revenue, i.e., the DUET (w. Request) curve minus the DUET (w/o. Request) curve.

From Figure 8, we have the following observations:

(1) The trends of MRS and DUET Revenue typically move in opposite directions. When the MRS value is low, IntellectReq believes that the edge's model cannot generalize well to the current data distribution and therefore uses the most recent real-time data to request model parameters; the revenue at such moments is usually positive and relatively high. When the MRS value is high, IntellectReq believes the on-edge model still generalizes well to the current data distribution and keeps the model updated at the previous moment t−1 instead of updating at t; requesting model parameters at such moments usually yields low, and often negative, revenue.

(2) Since the MRS of DUET (Random) is constant, it cannot predict the revenue of each request; its performance curve varies randomly with the arbitrary ordering of the groups.

4.2.3 Ablation Study

Figure 9: Ablation study on model architecture.

We conducted an ablation study to show the effectiveness of the different components of IntellectReq; the results are shown in Figure 9. We use w/o. and w. to denote "without" and "with", respectively:

• IntellectReq: both DM and MRD are used.
• IntellectReq (w/o. DM): MRD is used but DM is not.
• IntellectReq (w/o. MRD): DM is used but MRD is not.

From the figure, we have the following observations: (1) IntellectReq generally achieves the best performance across evaluation metrics in most cases, demonstrating its effectiveness. (2) When the request frequency is small, the difference between IntellectReq and IntellectReq (w/o. DM) is not immediately apparent, as shown in Figure 9(d); the difference becomes more noticeable as the request frequency increases within a certain range. In brief, the difference first shrinks, then grows, and finally shrinks again.

4.2.4 Time and Space Cost

Most edges have limited storage space, so the on-edge model must be small yet sufficient. The edge's computing power is also rather limited, and recommendation on the edge requires substantial real-time processing, so the model deployed on the edge must be both simple and fast.
Therefore, we analyze whether these methods are controllable and highly profitable under the DUET framework; their additional time and space resource consumption is shown in Table 1.

Table 1: Extra Time and Space Cost on the CDs dataset.

  Method         Controllable   Profitable   Time Cost       Space Cost (Param.)
  LOF            ✗              ✓            225s / 11.3ms   ≈0
  OC-SVM         ✗              ✓            160s / 9.7ms    ≈0
  Random         ✓              ✗            0s / 0.8ms      ≈0
  IntellectReq   ✓              ✓            11s / 7.9ms     ≈5.06k

In the time-cost column, the sign "/" separates the time consumption of cloud preprocessing from that of edge inference. Cloud preprocessing means that the cloud server first calculates MRS values from recent user data, determines the threshold according to its communication budget, and sends it to the edges; edge inference refers to the MRS computed when the click sequence on the edge is updated. The experimental results show that: 1) In terms of time, random requests are the fastest in both cloud preprocessing and edge inference, followed by our IntellectReq, while LOF and OC-SVM are the slowest. 2) In terms of space, Random, LOF, and OC-SVM require essentially no additional space, whereas our method requires deploying an extra 5.06k parameters on the edge. 3) In terms of controllability, Random and IntellectReq can realize edge-cloud communication under an arbitrary communication budget, while LOF and OC-SVM cannot. 4) In terms of high revenue, LOF, OC-SVM, and IntellectReq all qualify, but random requests do not. In general, our IntellectReq requires only minimal time cost (it does not affect real-time performance) and space cost (it is easy to deploy on smart edges) while providing both controllability and high profitability.

5 Conclusion

In this paper, we argue that under the EC-CDR framework most communications requesting new parameters from the cloud-based recommendation system are unnecessary, because on-edge data distributions are often stable. We introduced IntellectReq, a low-resource solution that estimates the value of each request and ensures adaptive, high-revenue edge-cloud communication. IntellectReq formulates a novel edge intelligence task to identify out-of-domain data, and maps real-time user behavior to a normal distribution, using multi-sampling outputs to assess how well the edge model generalizes to current user actions. Extensive experiments on three public benchmarks confirm IntellectReq's efficiency and broad applicability, promoting a more effective edge-cloud collaborative recommendation paradigm.

ACKNOWLEDGMENT

This work was supported by the National Key R&D Program of China (No. 2022ZD0119100), the Scientific Research Fund of Zhejiang Provincial Education Department (Y202353679), the National Natural Science Foundation of China (No. 62376243, 62037001, U20A20387), the StarryNight Science Fund of Zhejiang University Shanghai Institute for Advanced Study (SN-ZJU-SIAS-0010), a project of Shanghai AI Laboratory (P22KS00111), and the Program of Zhejiang Province Science and Technology (2022C01044).

References

Markus M. Breunig, Hans-Peter Kriegel, Raymond T. Ng, and Jörg Sander. 2000. LOF: identifying density-based local outliers. In Proceedings of the 2000 ACM SIGMOD International Conference on Management of Data. 93–104.
Han Cai, Chuang Gan, Ligeng Zhu, and Song Han. 2020. TinyTL: Reduce activations, not trainable parameters for efficient on-device learning. (2020).

Defu Cao, Yixiang Zheng, Parisa Hassanzadeh, Simran Lamba, Xiaomo Liu, and Yan Liu. 2023. Large Scale Financial Time Series Forecasting with Multi-faceted Model. In Proceedings of the Fourth ACM International Conference on AI in Finance (ICAIF '23). ACM, 472–480. https://doi.org/10.1145/3604237.3626868

Jianxin Chang, Chen Gao, Yu Zheng, Yiqun Hui, Yanan Niu, Yang Song, Depeng Jin, and Yong Li. 2021. Sequential recommendation with graph neural networks. In Proceedings of the 44th International ACM SIGIR Conference on Research and Development in Information Retrieval. 378–387.

Zhengyu Chen and Donglin Wang. 2021. Multi-Initialization Meta-Learning with Domain Adaptation. In ICASSP 2021. IEEE, 1390–1394.

Zhengyu Chen, Teng Xiao, and Kun Kuang. 2022. BA-GNN: On Learning Bias-Aware Graph Neural Network. In 2022 IEEE 38th International Conference on Data Engineering (ICDE). IEEE, 3012–3024.

Zhengyu Chen, Teng Xiao, Kun Kuang, Zheqi Lv, Min Zhang, Jinluan Yang, Chengqiang Lu, Hongxia Yang, and Fei Wu. 2023. Learning to Reweight for Graph Neural Network. arXiv preprint arXiv:2312.12475 (2023).

Zhengyu Chen, Teng Xiao, Kun Kuang, Zheqi Lv, Min Zhang, Jinluan Yang, Chengqiang Lu, Hongxia Yang, and Fei Wu. 2024. Learning to Reweight for Generalizable Graph Neural Network. In Proceedings of the AAAI Conference on Artificial Intelligence (2024).

Zhengyu Chen, Ziqing Xu, and Donglin Wang. 2021. Deep transfer tensor decomposition with orthogonal constraint for recommender systems. In Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 35. 4010–4018.

David Ha, Andrew Dai, and Quoc V. Le. 2017. Hypernetworks. (2017).

Balázs Hidasi, Alexandros Karatzoglou, Linas Baltrunas, and Domonkos Tikk. 2016. Session-based recommendations with recurrent neural networks. In International Conference on Learning Representations 2016.

Rongjie Huang, Jiawei Huang, Dongchao Yang, Yi Ren, Luping Liu, Mingze Li, Zhenhui Ye, Jinglin Liu, Xiang Yin, and Zhou Zhao. 2023. Make-An-Audio: Text-to-audio generation with prompt-enhanced diffusion models. arXiv preprint arXiv:2301.12661 (2023).

Rongjie Huang, Max W.Y. Lam, Jun Wang, Dan Su, Dong Yu, Yi Ren, and Zhou Zhao. 2022a. FastDiff: A Fast Conditional Diffusion Model for High-Quality Speech Synthesis. In IJCAI. ijcai.org, 4157–4163.

Rongjie Huang, Yi Ren, Jinglin Liu, Chenye Cui, and Zhou Zhao. 2022b. GenerSpeech: Towards style transfer for generalizable out-of-domain text-to-speech. Advances in Neural Information Processing Systems 35 (2022), 10970–10983.

Wei Ji, Renjie Liang, Lizi Liao, Hao Fei, and Fuli Feng. 2023a. Partial Annotation-based Video Moment Retrieval via Iterative Learning. In Proceedings of the 31st ACM International Conference on Multimedia.

Wei Ji, Xiangyan Liu, An Zhang, Yinwei Wei, and Xiang Wang. 2023b. Online Distillation-enhanced Multi-modal Transformer for Sequential Recommendation. In Proceedings of the 31st ACM International Conference on Multimedia.

Wang-Cheng Kang and Julian McAuley. 2018. Self-attentive sequential recommendation. In 2018 IEEE International Conference on Data Mining (ICDM). IEEE, 197–206.

Sara Latifi, Noemi Mauro, and Dietmar Jannach. 2021. Session-aware recommendation: A surprising quest for the state-of-the-art. Information Sciences 573 (2021), 291–315.

Haoxuan Li, Yanghao Xiao, Chunyuan Zheng, Peng Wu, and Peng Cui. 2023e. Propensity matters: Measuring and enhancing balancing for recommendation. In International Conference on Machine Learning. PMLR, 20182–20194.

Haoxuan Li, Yanghao Xiao, Chunyuan Zheng, Peng Wu, Zhi Geng, Xu Chen, and Peng Cui. 2024. Debiased Collaborative Filtering with Kernel-based Causal Balancing. In International Conference on Learning Representations.

Juncheng Li, Xin He, Longhui Wei, Long Qian, Linchao Zhu, Lingxi Xie, Yueting Zhuang, Qi Tian, and Siliang Tang. 2022a. Fine-grained semantically aligned vision-language pre-training. Advances in Neural Information Processing Systems 35 (2022), 7290–7303.

Juncheng Li, Kaihang Pan, Zhiqi Ge, Minghe Gao, Hanwang Zhang, Wei Ji, Wenqiao Zhang, Tat-Seng Chua, Siliang Tang, and Yueting Zhuang. 2023a. Fine-tuning Multimodal LLMs to Follow Zero-shot Demonstrative Instructions. arXiv preprint arXiv:2308.04152 (2023).

Li Li, Chenwei Wang, You Qin, Wei Ji, and Renjie Liang. 2023b. Biased-Predicate Annotation Identification via Unbiased Visual Predicate Representation. In Proceedings of the 31st ACM International Conference on Multimedia (MM '23). ACM, 4410–4420. https://doi.org/10.1145/3581783.3611847

Mengze Li, Han Wang, Wenqiao Zhang, Jiaxu Miao, Zhou Zhao, Shengyu Zhang, Wei Ji, and Fei Wu. 2023d. Winner: Weakly-supervised hierarchical decomposition and alignment for spatio-temporal video grounding. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 23090–23099.

Mengze Li, Tianbao Wang, Jiahe Xu, Kairong Han, Shengyu Zhang, Zhou Zhao, Jiaxu Miao, Wenqiao Zhang, Shiliang Pu, and Fei Wu. 2023c. Multi-modal Action Chain Abductive Reasoning. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). 4617–4628.

Mengze Li, Tianbao Wang, Haoyu Zhang, Shengyu Zhang, Zhou Zhao, Jiaxu Miao, Wenqiao Zhang, Wenming Tan, Jin Wang, Peng Wang, et al. 2022b. End-to-End Modeling via Information Tree for One-Shot Natural Language Spatial Video Grounding. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). 8707–8717.

Xin-Yu Lin, Yi-Yan Xu, Wen-Jie Wang, Yang Zhang, and Fu-Li Feng. 2023. Mitigating Spurious Correlations for Self-supervised Recommendation. Machine Intelligence Research 20, 2 (2023), 263–275.

Zheqi Lv, Feng Wang, Shengyu Zhang, Kun Kuang, Hongxia Yang, and Fei Wu. 2022. Personalizing Intervened Network for Long-tailed Sequential User Behavior Modeling. arXiv preprint arXiv:2208.09130 (2022).

Zheqi Lv, Feng Wang, Shengyu Zhang, Wenqiao Zhang, Kun Kuang, and Fei Wu. 2023a. Parameters Efficient Fine-Tuning for Long-Tailed Sequential Recommendation. In CAAI International Conference on Artificial Intelligence. Springer, 442–459.

Zheqi Lv, Wenqiao Zhang, Shengyu Zhang, Kun Kuang, Feng Wang, Yongwei Wang, Zhengyu Chen, Tao Shen, Hongxia Yang, Beng Chin Ooi, and Fei Wu. 2023b. DUET: A Tuning-Free Device-Cloud Collaborative Parameters Generation Framework for Efficient Device Model Generalization. In Proceedings of the ACM Web Conference 2023.

Othmane Marfoq, Giovanni Neglia, Aurélien Bellet, Laetitia Kameni, and Richard Vidal. 2021. Federated multi-task learning under a mixture of distributions. Advances in Neural Information Processing Systems 34 (2021), 15434–15447.

Brendan McMahan, Eider Moore, Daniel Ramage, Seth Hampson, and Blaise Aguera y Arcas. 2017. Communication-efficient learning of deep networks from decentralized data. In Artificial Intelligence and Statistics. PMLR, 1273–1282.

Jed Mills, Jia Hu, and Geyong Min. 2021. Multi-task federated learning for personalised deep neural networks in edge computing. IEEE Transactions on Parallel and Distributed Systems 33, 3 (2021), 630–641.

Xufeng Qian, Yue Xu, Fuyu Lv, Shengyu Zhang, Ziwen Jiang, Qingwen Liu, Xiaoyi Zeng, Tat-Seng Chua, and Fei Wu. 2022. Intelligent Request Strategy Design in Recommender System. In KDD '22: The 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining. ACM, 3772–3782.

Fang-Yu Qin, Zhe-Qi Lv, Dan-Ni Wang, Bo Hu, and Chao Wu. 2020. Health status prediction for the elderly based on machine learning. Archives of Gerontology and Geriatrics 90 (2020), 104121.

Steffen Rendle, Christoph Freudenthaler, and Lars Schmidt-Thieme. 2010. Factorizing personalized Markov chains for next-basket recommendation. In The Web Conference (2010).

Victor Sanh, Lysandre Debut, Julien Chaumond, and Thomas Wolf. 2019. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019).

Jiajie Su, Chaochao Chen, Zibin Lin, Xi Li, Weiming Liu, and Xiaolin Zheng. 2023a. Personalized Behavior-Aware Transformer for Multi-Behavior Sequential Recommendation. In Proceedings of the 31st ACM International Conference on Multimedia. 6321–6331.

Jiajie Su, Chaochao Chen, Weiming Liu, Fei Wu, Xiaolin Zheng, and Haoming Lyu. 2023b. Enhancing Hierarchy-Aware Graph Networks with Deep Dual Clustering for Session-based Recommendation. In Proceedings of the ACM Web Conference 2023. 165–176.

Fei Sun, Jun Liu, Jian Wu, Changhua Pei, Xiao Lin, Wenwu Ou, and Peng Jiang. 2019. BERT4Rec: Sequential recommendation with bidirectional encoder representations from transformer. In Proceedings of the 28th ACM International Conference on Information and Knowledge Management. 1441–1450.

Zihao Tang, Zheqi Lv, Shengyu Zhang, Fei Wu, and Kun Kuang. 2024a. ModelGPT: Unleashing LLM's Capabilities for Tailored Model Generation. arXiv preprint arXiv:2402.12408 (2024).

Zihao Tang, Zheqi Lv, Shengyu Zhang, Yifan Zhou, Xinyu Duan, Kun Kuang, and Fei Wu. 2024b. AuG-KD: Anchor-Based Mixup Generation for Out-of-Domain Knowledge Distillation. In 12th International Conference on Learning Representations (ICLR 2024). OpenReview.net. https://openreview.net/forum?id=fcqWJ8JgMR

David Martinus Johannes Tax. 2002. One-class classification: Concept learning in the absence of counter-examples. (2002).

Yunze Tong, Junkun Yuan, Min Zhang, Didi Zhu, Keli Zhang, Fei Wu, and Kun Kuang. 2023. Quantitatively Measuring and Contrastively Exploring Heterogeneity for Domain Generalization. In KDD. ACM, 2189–2200.

Xiao Wang, Peng Cui, Jing Wang, Jian Pei, Wenwu Zhu, and Shiqiang Yang. 2017. Community preserving network embedding. In Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 31.

Shu Wu, Yuyuan Tang, Yanqiao Zhu, Liang Wang, Xing Xie, and Tieniu Tan. 2019. Session-based recommendation with graph neural networks. In Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 33. 346–353.

Yiquan Wu, Weiming Lu, Yating Zhang, Adam Jatowt, Jun Feng, Changlong Sun, Fei Wu, and Kun Kuang. 2023a. Focus-aware response generation in inquiry conversation. In Findings of the Association for Computational Linguistics: ACL 2023. 12585–12599.

Yiquan Wu, Siying Zhou, Yifei Liu, Weiming Lu, Xiaozhong Liu, Yating Zhang, Changlong Sun, Fei Wu, and Kun Kuang. 2023b. Precedent-Enhanced Legal Judgment Prediction with LLM and Domain-Model Collaboration. arXiv preprint arXiv:2310.09241 (2023).

Xinyu Lin, Wenjie Wang, Jujia Zhao, Yongqi Li, Fuli Feng, and Tat-Seng Chua. 2024. Temporally and Distributionally Robust Optimization for Cold-start Recommendation. In AAAI.

Bencheng Yan, Pengjie Wang, Kai Zhang, Feng Li, Jian Xu, and Bo Zheng. 2022b. APG: Adaptive Parameter Generation Network for Click-Through Rate Prediction. In Advances in Neural Information Processing Systems.

Yikai Yan, Chaoyue Niu, Renjie Gu, Fan Wu, Shaojie Tang, Lifeng Hua, Chengfei Lyu, and Guihai Chen. 2022a. On-Device Learning for Model Personalization with Large-Scale Cloud-Coordinated Domain Adaption. In KDD '22: The 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining. 2180–2190.

Jiangchao Yao, Feng Wang, Xichen Ding, Shaohu Chen, Bo Han, Jingren Zhou, and Hongxia Yang. 2022a. Device-cloud Collaborative Recommendation via Meta Controller. In KDD '22: The 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining. 4353–4362.

Jiangchao Yao, Shengyu Zhang, Yang Yao, Feng Wang, Jianxin Ma, Jianwei Zhang, Yunfei Chu, Luo Ji, Kunyang Jia, Tao Shen, et al. 2022b. Edge-Cloud Polarization and Collaboration: A Comprehensive Survey for AI. IEEE Transactions on Knowledge and Data Engineering (2022).

Fengda Zhang, Kun Kuang, Long Chen, Yuxuan Liu, Chao Wu, and Jun Xiao. 2022a. Fairness-aware contrastive learning with partially annotated sensitive attributes. In The Eleventh International Conference on Learning Representations.

Fengda Zhang, Kun Kuang, Long Chen, Zhaoyang You, Tao Shen, Jun Xiao, Yin Zhang, Chao Wu, Fei Wu, Yueting Zhuang, et al. 2023b. Federated unsupervised representation learning. Frontiers of Information Technology & Electronic Engineering 24, 8 (2023), 1181–1193.

Shengyu Zhang, Fuli Feng, Kun Kuang, Wenqiao Zhang, Zhou Zhao, Hongxia Yang, Tat-Seng Chua, and Fei Wu. 2023a. Personalized Latent Structure Learning for Recommendation. IEEE Transactions on Pattern Analysis and Machine Intelligence (2023).

Shengyu Zhang, Tan Jiang, Tan Wang, Kun Kuang, Zhou Zhao, Jianke Zhu, Jin Yu, Hongxia Yang, and Fei Wu. 2020. DeVLBert: Learning Deconfounded Visio-Linguistic Representations. In MM '20: The 28th ACM International Conference on Multimedia. ACM, 4373–4382.

Wenqiao Zhang, Changshuo Liu, Lingze Zeng, Beng Chin Ooi, Siliang Tang, and Yueting Zhuang. 2023c. Learning in Imperfect Environment: Multi-Label Classification with Long-Tailed Distribution and Partial Labels. In Proceedings of the IEEE/CVF International Conference on Computer Vision. 1423–1432.

Wenqiao Zhang and Zheqi Lv. 2024. Revisiting the Domain Shift and Sample Uncertainty in Multi-source Active Domain Transfer. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition.

Wenqiao Zhang, Haochen Shi, Jiannan Guo, Shengyu Zhang, Qingpeng Cai, Juncheng Li, Sihui Luo, and Yueting Zhuang. 2021. MAGIC: Multimodal relAtional Graph adversarIal inferenCe for Diverse and Unpaired Text-based Image Captioning. arXiv preprint arXiv:2112.06558 (2021).

Wenqiao Zhang, Lei Zhu, James Hallinan, Shengyu Zhang, Andrew Makmur, Qingpeng Cai, and Beng Chin Ooi. 2022b. BoostMIS: Boosting medical image semi-supervised learning with adaptive pseudo labeling and informative active annotation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 20666–20676.

Yifei Zhang, Hao Zhu, Zixing Song, Piotr Koniusz, Irwin King, et al. 2024. Mitigating the Popularity Bias of Graph Collaborative Filtering: A Dimensional Collapse Perspective. Advances in Neural Information Processing Systems 36 (2024).

Guorui Zhou, Xiaoqiang Zhu, Chenru Song, Ying Fan, Han Zhu, Xiao Ma, Yanghui Yan, Junqi Jin, Han Li, and Kun Gai. 2018. Deep interest network for click-through rate prediction. In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining. 1059–1068.

Didi Zhu, Yinchuan Li, Yunfeng Shao, Jianye Hao, Fei Wu, Kun Kuang, Jun Xiao, and Chao Wu. 2023a. Generalized Universal Domain Adaptation with Generative Flow Networks. In ACM Multimedia. ACM, 8304–8315.

Didi Zhu, Yinchuan Li, Junkun Yuan, Zexi Li, Kun Kuang, and Chao Wu. 2023b. Universal domain adaptation via compressive attention matching. In Proceedings of the IEEE/CVF International Conference on Computer Vision. 6974–6985.
Appendix A. Appendix

This is the Appendix for "Intelligent Model Update Strategy for Sequential Recommendation".

A.1 Supplementary Method

A.1.1 Notations and Definitions. We summarize notations and definitions in Table 2.

Table 2. Notations and Definitions.

| Notation | Definition |
| $u$ | User |
| $v$ | Item |
| $s$ | Behavior sequence |
| $d$ | Edge |
| $\mathcal{D}=\{d^{(i)}\}_{i=1}^{N_d}$ | Set of edges |
| $\mathcal{S}_{H}^{(i)}$, $\mathcal{S}_{R}^{(i)}$, $\mathcal{S}_{\mathrm{MRD}}$ | History samples, real-time samples, MRD samples |
| $N_d$, $N_H^{(i)}$, $N_R^{(i)}$ | Number of edges, number of history samples, number of real-time samples |
| $\Theta_g$, $\Theta_d$, $\Theta_{\mathrm{MRD}}$ | Parameters of the global cloud model, the local edge model, and the local edge control model |
| $\mathcal{M}_g(\cdot;\Theta_g)$, $\mathcal{M}_{d^{(i)}}(\cdot;\Theta_{d^{(i)}})$, $\mathcal{M}_{c^{(i)}}^{t}(\mathcal{S}_{\mathrm{MRD}};\Theta_{\mathrm{MRD}})$ | Global cloud model, local edge recommendation model, local edge control model |
| $\mathcal{L}_{rec}$, $\mathcal{L}_{\mathrm{MRD}}$ | Loss function of recommendation, loss function of mis-recommendation |
| $\Omega$ | Feature extractor |

A.1.2 Optimization Target. To describe it in the simplest way, we assume that the set of edges is $\mathcal{D}=\{d^{(i)}\}_{i=1}^{N_d}$, the set updated using the baseline method is $\mathcal{D}'_u=\{d^{(i)}\}_{i=1}^{N'_u}$, and the set updated using our method is $\mathcal{D}_u=\{d^{(i)}\}_{i=1}^{N_u}$. $N_d$, $N'_u$, and $N_u$ are the sizes of $\mathcal{D}$, $\mathcal{D}'_u$, and $\mathcal{D}_u$, respectively. The communication upper bound is set to $N_{thres}$. Suppose the ground-truth value $y$, the prediction of the baseline models $\hat{y}'$, and the prediction of our model $\hat{y}$ are row vectors. Our optimization target is then to obtain the highest model performance while limiting the upper bound of the communication frequency:

(21)
$$\begin{aligned} \text{Maximize}\quad & \hat{y}\,y^{\top}, \\ \text{Subject to}\quad & 0 \le N_u \le N_{thres},\;\; N_u \le N'_u,\;\; \mathcal{D}_u \subset \mathcal{D}. \end{aligned}$$

In this case, the improvement of our method is $\Delta = \hat{y}\,y^{\top} - \hat{y}'\,y^{\top}$. Equivalently, the target can be cast as reducing the communication frequency without degrading performance:

(22)
$$\begin{aligned} \text{Minimize}\quad & N_u, \\ \text{Subject to}\quad & 0 \le N_u \le N_{thres},\;\; \hat{y}\,y^{\top} \ge \hat{y}'\,y^{\top},\;\; \mathcal{D}_u \subset \mathcal{D}. \end{aligned}$$

In this case, the improvement of our method is $\Delta = N - N_u$.

A.2 Supplementary Experimental Results

A.2.1 Datasets. We evaluate IntellectReq and the baselines on Amazon CDs (CDs) and Amazon Electronic (Electronic) (https://jmcauley.ucsd.edu/data/amazon/) and Douban Book (Book) (https://www.kaggle.com/datasets/fengzhujoey/douban-datasetratingreviewside-information), three widely used public benchmarks in recommendation tasks; Table 3 shows their statistics. Following conventional practice, all user-item pairs present in a dataset are treated as positive samples. To conduct sequential recommendation experiments, we arrange the items clicked by each user into a sequence ordered by timestamp. Following (Zhou et al., 2018; Kang and McAuley, 2018; Hidasi et al., 2016), we negatively sample at ratios of 1:4 and 1:99 in the training and testing sets, respectively. Negative sampling treats all user-item pairs that do not exist in the dataset as negative samples.
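To make the sampling protocol concrete, here is a minimal Python sketch of the 1:4 / 1:99 scheme described above; the data structures and helper name are our own illustrative choices, not the authors' implementation.

```python
import random

def negative_sample(user_pos_items, all_items, ratio, rng=random.Random(0)):
    """Pair every observed (user, item) interaction with `ratio` items
    the user never interacted with, labeled as negatives."""
    samples = []
    for user, pos_items in user_pos_items.items():
        unseen = sorted(all_items - pos_items)   # candidate negatives
        for item in sorted(pos_items):
            samples.append((user, item, 1))      # existing pair -> positive
            for neg in rng.sample(unseen, ratio):
                samples.append((user, neg, 0))   # absent pair -> negative
    return samples

interactions = {"u1": {"i1", "i2"}, "u2": {"i3"}}
items = {f"i{k}" for k in range(1, 201)}
train = negative_sample(interactions, items, ratio=4)    # 1:4 for training
test = negative_sample(interactions, items, ratio=99)    # 1:99 for testing
```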
Table 3. Statistics of Datasets.

|  | Amazon CDs | Amazon Electronic | Douban Books |
| #User | 1,578,597 | 4,201,696 | 46,549 |
| #Item | 486,360 | 476,002 | 212,996 |
| #Interaction | 3,749,004 | 7,824,482 | 1,861,533 |
| #Density | 0.0000049 | 0.0000039 | 0.0002746 |

A.2.2 Evaluation Metrics. In the experiments, we use the widely adopted AUC, UAUC, HitRate, and NDCG as the metrics to evaluate model performance. They are defined by the following equations:

(23)
$$\mathrm{AUC} = \frac{\sum_{x_0\in D^T}\sum_{x_1\in D^F} \mathbb{1}\left[f(x_1) < f(x_0)\right]}{|D^T|\,|D^F|},$$

(24)
$$\mathrm{UAUC} = \frac{1}{|\mathcal{U}|}\sum_{u\in\mathcal{U}} \frac{\sum_{x_0\in D_u^T}\sum_{x_1\in D_u^F} \mathbb{1}\left[f(x_1) < f(x_0)\right]}{|D_u^T|\,|D_u^F|},$$

(25)
$$\mathrm{NDCG@K} = \frac{1}{|\mathcal{U}|}\sum_{u\in\mathcal{U}} \frac{2^{\mathbb{1}(R_{u,g_u} \le K)} - 1}{\log_2\left(R_{u,g_u} + 1\right)},$$

(26)
$$\mathrm{HitRate@K} = \frac{1}{|\mathcal{U}|}\sum_{u\in\mathcal{U}} \mathbb{1}\left(R_{u,g_u} \le K\right),$$

where $\mathbb{1}(\cdot)$ is the indicator function, $f$ is the model to be evaluated, and $R_{u,g_u}$ is the rank predicted by the model for the ground-truth item $g_u$ of user $u$. $D^T$ and $D^F$ are the positive and negative testing sample sets, and $D_u^T$ and $D_u^F$ are the positive and negative testing sample sets of user $u$, respectively.
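The rank-based metrics above reduce to a few lines of code. The sketch below is our paraphrase of Eqs. (23), (25), and (26) for the single-ground-truth setting, with toy inputs:

```python
import math

def auc(pos_scores, neg_scores):
    """Eq. (23): fraction of (positive, negative) pairs the model orders correctly."""
    pairs = [(p, n) for p in pos_scores for n in neg_scores]
    return sum(p > n for p, n in pairs) / len(pairs)

def hitrate_at_k(ranks, k):
    """Eq. (26): ranks[u] is the 1-indexed rank of user u's ground-truth item."""
    return sum(r <= k for r in ranks) / len(ranks)

def ndcg_at_k(ranks, k):
    """Eq. (25): a hit at rank r contributes 1 / log2(r + 1); misses contribute 0."""
    return sum(1.0 / math.log2(r + 1) for r in ranks if r <= k) / len(ranks)

ranks = [1, 3, 12, 2, 7]                  # toy ranks of the true next item
print(hitrate_at_k(ranks, 10))            # 0.8
print(round(ndcg_at_k(ranks, 10), 3))     # 0.493
```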
A.2.3 Request Frequency and Threshold. Figure 10 shows the relationship between the request frequency and different thresholds. (Figure 10: Request frequency w.r.t. different thresholds.)

A.3 Training Procedure and Inference Procedure

In this section, we describe the overall pipeline in detail in conjunction with Figure 11. (Figure 11: The overall pipeline of our proposed IntellectReq.)

1. Training Procedure

① We first pre-train an EC-CDR framework; EC-CDR can generate model parameters from data.

② MRD training procedure. 1) Construct the MRD dataset. Assume the current time is T. We take the model parameters generated under the EC-CDR framework from the data at moment t = 0 and apply the resulting model to the data at the current moment t = T. This gives a prediction ŷ, and comparing ŷ with y tells us whether the model mis-recommends. Repeating this for the data used for parameter generation from t = 0 to t = T−1 constructs the MRD dataset. It contains three columns: the data used for parameter generation (x1), the current data (x2), and whether the model mis-recommends (y_MRD). 2) Train MRD. MRD is a fully connected neural network that takes x1 and x2 as input and fits the mis-recommendation label y_MRD. The trained MRD can determine whether model parameters generated from data at some earlier moment are still valid for the current data; its output can be read as a Mis-Recommendation Score (MRS).

③ DM training procedure. We map the data into a Gaussian distribution with the Conditional-VAE method, then sample feature vectors from the distribution to complete the next-item prediction task, i.e., to predict the item the user will click next. This yields the DM. DM can compute multiple next-items by sampling from the distribution several times, which is used to calculate uncertainty.

④ Joint training procedure of MRD and DM. We use a fully connected neural network, denoted f(·), that takes the MRS and the uncertainty as input and fits y_MRD, the mis-recommendation label, in the MRD dataset.

2. Inference Procedure

The MRS is calculated on the cloud using all recent user data, and the MRS threshold is determined according to the load; this threshold is then sent to each edge. Suppose an edge last updated its model at some moment t = n with n < T. Whether it should update again at moment t = T amounts to asking whether the current model is invalid for the current data distribution. We only need to feed the MRS and uncertainty computed from the data at moments t = n and t = T into f(·) to decide. In fact, f(·) outputs a degree of invalidity, a continuous value between 0 and 1, and whether to update the edge model depends on the threshold calculated on the cloud based on the load.

A.4 Hyperparameters and Training Schedules

We summarize the hyperparameters and training schedules of IntellectReq on the three datasets in Table 4.

Table 4. Hyperparameters and training schedules (shared across Amazon CDs, Amazon Electronic, and Douban Book).

| Parameter | Setting |
| GPU | Tesla A100 |
| Optimizer | Adam |
| Learning rate | 0.001 |
| Batch size | 1024 |
| Sequence length | 30 |
| Dimension of z | 1×64 |
| N | 32 |
| n | 10 |

A.4.1 Impact on the Real World. The case below is based on a dynamic model from the previous moment; if it were based on an on-edge static model, the improvement would be much more significant. We present some intuitive figures and examples to show the challenge and IntellectReq's impact on the real world:

Table 5. IntellectReq's Impact on the Real World.

| | Google Bytes | Google FLOPs | Alibaba Bytes | Alibaba FLOPs |
| EC-CDR | 4.69 GB | 152.46 G | 53.19 GB | 1.68 T |
| IntellectReq | 3.79 GB | 123.49 G | 43.08 GB | 1.36 T |
| Δ | 19.2% (average) | | | |

(1) We calculate the number of bytes and FLOPs required to update one model: 48.5 kB and 1.53M FLOPs. That is, updating a model on the edge requires transmitting 48.5 kB through edge-cloud communication and consumes 1.53M FLOPs of cloud computation. (2) According to the report, Google processes 99,000 clicks per second, so it would need to transmit 48.5 kB × 99k ≈ 4.69 GB per second and consume 1.53M × 99k ≈ 152.46 GFLOPs per second on the cloud server. Alibaba processes 1,150,000 clicks per second, so it would need to transmit 48.5 kB × 1150k ≈ 53.19 GB per second and consume 1.53M × 1150k ≈ 1.68 TFLOPs per second. These are not even peak values. Clearly, such enormous bandwidth and compute consumption make it hard to update the model for every edge at every moment, especially at peak times. (3) Sometimes the distributed nature of today's clouds can absorb the computational volume, since enough servers can be called on to support edge-cloud collaboration. However, the huge resource consumption is impractical in real scenarios. Moreover, according to our empirical study, IntellectReq brings a 21.4% resource saving at matched performance under the APG framework, and a 16.6% resource saving at matched performance under the DUET framework.
Summing up, IntellectReq saves about 19% of resources on average, which is very helpful for cost control and can facilitate the deployment of EC-CDR in practice. Table 5 compares our IntellectReq with EC-CDR in the amount of data transmitted and the computing power consumed on the cloud. (4) During peak periods, tight resources cause stalls or even crashes, and this is before EC-CDR is even deployed, i.e., when edge-cloud communication carries only the most basic user data. IntellectReq can thus achieve better performance than EC-CDR under any resource limit ε, or match the performance for which EC-CDR requires ε+19% of the resources.
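The traffic and compute figures above follow from simple per-request arithmetic. The sketch below reproduces them in decimal units (the quoted table values differ slightly, presumably from binary-unit conversions); the click rates are the ones quoted in the text.

```python
BYTES_PER_UPDATE = 48.5e3    # 48.5 kB transmitted per parameter update
FLOPS_PER_UPDATE = 1.53e6    # 1.53M cloud FLOPs per parameter update

for platform, clicks_per_sec in [("Google", 99_000), ("Alibaba", 1_150_000)]:
    gb_per_sec = BYTES_PER_UPDATE * clicks_per_sec / 1e9
    gflops_per_sec = FLOPS_PER_UPDATE * clicks_per_sec / 1e9
    print(f"{platform}: {gb_per_sec:.2f} GB/s, {gflops_per_sec:.2f} GFLOPs/s")
# Google:  4.80 GB/s,  151.47 GFLOPs/s  (quoted: 4.69 GB, 152.46 G)
# Alibaba: 55.78 GB/s, 1759.50 GFLOPs/s (quoted: 53.19 GB, 1.68 T)
```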
Formally, EC-CDR couples the global cloud model with each local edge model through parameter requests:

(1)
$$\text{EC-CDR}: \underbrace{\mathcal{M}_g\big(\{\mathcal{S}_{H^{(i)}}\}_{i=1}^{\mathcal{N}_d};\,\Theta_g\big)}_{\mathrm{Global\ Cloud\ Model}} \;\overset{Parameters}{\underset{Data}{\rightleftarrows}}\; \underbrace{\mathcal{M}_{d^{(i)}}\big(\mathcal{S}_{R^{(i)}};\,\Theta_{d^{(i)}}\big)}_{\mathrm{Local\ Edge\ Model}}.$$

To determine whether to request parameters from the cloud, IntellectReq uses $\mathcal{S}_{\mathrm{MRD}}$ to learn a Mis-Recommendation Detector, which decides whether to update the edge model through the EC-CDR framework. $\mathcal{S}_{\mathrm{MRD}}$ is the dataset constructed from $\mathcal{S}_H$, without any additional annotations, for training IntellectReq, and $\Theta_{\mathrm{MRD}}$ denotes the learned parameters of the local MRD model:

(2)
$$\text{IntellectReq}: \underbrace{\mathcal{M}_{c^{(i)}}^{t}\big(\mathcal{S}_{\mathrm{MRD}};\,\Theta_{\mathrm{MRD}}\big)}_{\mathrm{Local\ Edge\ Model\ Control}} \;\longrightarrow\; \underbrace{\Big(\mathcal{M}_g \overset{Parameters}{\underset{Data}{\rightleftarrows}} \mathcal{M}_{d^{(i)}}\Big)}_{\text{EC-CDR}}.$$

3.2 IntellectReq

The overview figure presents the recommendation model, EC-CDR, and the IntellectReq framework, which consists of a Mis-Recommendation Detector (MRD) and a Distribution Mapper (DM) to achieve high revenue under any request budget. We first introduce EC-CDR and then present IntellectReq, which we propose to overcome the frequent, low-revenue requests of EC-CDR. IntellectReq achieves high communication revenue under any edge-cloud communication budget in EC-CDR. MRD determines whether to request parameters from the cloud model $\mathcal{M}_g$ or to keep using the edge recommendation model $\mathcal{M}_d$, based on the real-time data $\mathcal{S}_{R^{(i)}}$.
DM helps MRD make further judgments by discriminating the uncertainty in the recommendation model's understanding of the data semantics.

3.2.1 The Framework of EC-CDR. In EC-CDR, a recommendation model with static layers and dynamic layers is trained for the global cloud model. The goal of EC-CDR can thus be formulated as the following optimization problem:

(3)
$$\hat{y}_{H^{(i)}}^{(j)} = f_{rec}\big(\Omega(x_{H^{(i)}}^{(j)};\,\Theta_{gb});\,\Theta_{gc}\big), \qquad \mathcal{L}_{rec} = \sum_{i=1}^{\mathcal{N}_d}\sum_{j=1}^{N_{R}^{(i)}} D_{ce}\big(y_{H^{(i)}}^{(j)},\,\hat{y}_{H^{(i)}}^{(j)}\big),$$

where $D_{ce}(\cdot)$ denotes the cross-entropy between two probability distributions, $f_{rec}(\cdot)$ denotes the dynamic layers of the recommendation model, and $\Omega(x_{H^{(i)}}^{(j)};\,\Theta_{gb})$ is the static layers extracting features from $x_{H^{(i)}}^{(j)}$. EC-CDR decouples the edge model into "static layers" and "dynamic layers" to achieve better personalization. The primary factor enhancing the on-edge model's generalization to real-time data through EC-CDR is its dynamic layers. Upon completion of training, the static layers' parameters remain fixed, denoted $\Theta_{gb}$, as determined by Eq. (3). Conversely, the dynamic layers' parameters, represented by $\Theta_{gc}$, are dynamically generated from real-time data by the cloud generator.
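Before turning to the generator, it may help to see the static/dynamic decoupling as code. The PyTorch sketch below is our own illustration (module names and shapes are assumptions, not the authors' implementation): the feature extractor Ω stays fixed on the edge, while the weights of the recommendation head are injected from outside, e.g., from a cloud-side generator.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DecoupledEdgeModel(nn.Module):
    """Static layers Ω(·; Θ_gb) plus a dynamic head f_rec(·; Θ_gc)
    whose weights arrive from the cloud instead of living on the edge."""
    def __init__(self, n_items=1000, dim=64):
        super().__init__()
        self.embed = nn.Embedding(n_items, dim)             # part of Ω
        self.encoder = nn.GRU(dim, dim, batch_first=True)   # part of Ω

    def forward(self, seq, dyn_weight, dyn_bias):
        _, h = self.encoder(self.embed(seq))   # Ω: encode the click sequence
        feat = h[-1]                           # (batch, dim) features
        # f_rec: dynamic layer parameterized by the generated Θ_gc
        return torch.sigmoid(F.linear(feat, dyn_weight, dyn_bias))

model = DecoupledEdgeModel()
seq = torch.randint(0, 1000, (8, 30))      # batch of click sequences
w, b = torch.randn(1, 64), torch.randn(1)  # stand-ins for generated parameters
scores = model(seq, w, b)                  # (8, 1) click probabilities
```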
In edge inference, the cloud-based parameter generator uses the real-time click sequence $s_{R^{(i)}}^{(j,t)} \in \mathcal{S}_{R^{(i)}}$ to generate the parameters:

(4)
$$h_{R^{(i)}}^{(n)} = L_{layer}^{(n)}\big(e_{R^{(i)}}^{(j,t)} = E_{shared}(s_{R^{(i)}}^{(j,t)})\big), \quad \forall n = 1,\cdots,N_l,$$

where $E_{shared}(\cdot)$ represents the shared encoder and $L_{layer}^{(n)}(\cdot)$ is a linear layer that adapts $e_{R^{(i)}}^{(j,t)}$, the output of $E_{shared}(\cdot)$, to the features of the $n$-th dynamic layer. $e_{R^{(i)}}^{(j,t)}$ is the embedding vector generated from the click sequence at moment $t$. The cloud generator model treats the parameters of a fully-connected layer as a matrix $K^{(n)} \in \mathbb{R}^{N_{in}\times N_{out}}$, where $N_{in}$ and $N_{out}$ represent the numbers of input and output neurons of the $n$-th fully-connected layer, respectively. The cloud generator model $g(\cdot)$ then converts the real-time click sequence $s_{R^{(i)}}^{(j,t)}$ into the dynamic layers' parameters $\hat{\Theta}_{gc}$ via $K_{R^{(i)}}^{(n)} = g^{(n)}(e_{R^{(i)}}^{(n)})$. Since the superscript $(n)$ is no longer needed below, we abbreviate $g(\cdot) = L_{layer}^{(n)}(E_{shared}(\cdot))$. The edge recommendation model then updates its parameters and performs inference as follows:

(5)
$$\hat{y}_{R^{(i)}}^{(j,t)} = f_{rec}\big(\Omega(x_{R^{(i)}}^{(j,t)};\,\Theta_{gb});\,\hat{\Theta}_{gc} = g(s_{R^{(i)}}^{(j,t)};\,\Theta_p)\big).$$
Figure 4. Overview of the proposed Distribution Mapper. Training procedure: the architecture includes the Recommendation Network, Prior Network, Posterior Network, and Next-item Prediction Network; the loss consists of the classification loss and the KL-divergence loss. Inference procedure: the architecture includes the Recommendation Network, Prior Network, and Next-item Prediction Network; the uncertainty is calculated from the multi-sampling output.

In cloud training, all layers of the cloud generator model are optimized together with the static layers of the primary model, conditioned on the global history data $\mathcal{S}_{H^{(i)}} = \{x_{H^{(i)}}^{(j)}, y_{H^{(i)}}^{(j)}\}_{j=1}^{N_H^{(i)}}$, instead of first optimizing the static layers of the primary model and then optimizing the cloud generator model. The cloud generator model's loss function is defined in Eq. (6). EC-CDR can thereby improve the generalization ability of the edge recommendation model.
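The body of Eq. (6) did not survive extraction, but the paragraph pins down the recipe: the generator and the static layers are optimized jointly on history data under a recommendation loss. Below is a minimal sketch of one such joint step, under our own simplifying assumption that the generator produces a per-sample linear head from the same sequence feature; it is an illustration, not the authors' loss.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

dim = 64
embed = nn.Embedding(1000, dim)        # static layers Ω (stand-ins)
omega = nn.GRU(dim, dim, batch_first=True)
generator = nn.Linear(dim, dim + 1)    # g(·): feature -> (weight, bias)
opt = torch.optim.Adam(
    [*embed.parameters(), *omega.parameters(), *generator.parameters()], lr=1e-3)

def joint_step(seq, y):
    """One optimization step over Ω and g together, not one after the other."""
    _, h = omega(embed(seq))
    feat = h[-1]                                   # Ω output, (batch, dim)
    theta = generator(feat)                        # per-sample dynamic parameters
    w, b = theta[:, :dim], theta[:, dim:]
    logit = (feat * w).sum(-1, keepdim=True) + b   # dynamic head applied to feat
    loss = F.binary_cross_entropy_with_logits(logit, y)
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

seq = torch.randint(0, 1000, (8, 30))
y = torch.randint(0, 2, (8, 1)).float()
print(joint_step(seq, y))
```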
However, EC-CDR cannot be easily deployed in a real-world environment because of its high request frequency and low communication revenue. Under the EC-CDR framework, the moment $t$ in Eq. (5) always equals the current moment $T$, which means the edge and the cloud communicate at every moment. In practice, however, much of this communication is unnecessary, because the $\hat{\Theta}_{gc}$ generated from an earlier sequence may still work well enough. To alleviate this issue, we propose MRD and DM to decide when the edge recommendation model should update its parameters.

3.2.2 Mis-Recommendation Detector. The training procedure of MRD has two stages. The goal of the first stage is to construct an MRD dataset, based on the user's historical data and without any additional annotation, on which to train the MRD. The cloud model $\mathcal{M}_g$ and the edge model $\mathcal{M}_d$ are trained in the same way as in the EC-CDR training procedure:

(7)
$$\hat{y}_{R^{(i)}}^{(j,t,t')} = f_{rec}\big(\Omega(x_{R^{(i)}}^{(j,t)};\,\Theta_{gb});\,\hat{\Theta}_{gc} = g(s_{R^{(i)}}^{(j,t')};\,\Theta_p)\big).$$

Here we set $t' \le t = T$. That is, when generating model parameters we use the click sequence $s_{R^{(i)}}^{(j,t')}$ from an earlier moment $t'$, but the resulting model is used to predict the current data.
We can then obtain $c^{(j,t,t')}$, which indicates whether the sample is correctly predicted, by comparing the prediction $\hat{y}_{R^{(i)}}^{(j,t,t')}$ with the ground truth $y_{R^{(i)}}^{(j,t)}$:

(8)
$$c^{(j,t,t')} = \begin{cases} 1, & \hat{y}_{R^{(i)}}^{(j,t,t')} = y_{R^{(i)}}^{(j,t)}; \\ 0, & \hat{y}_{R^{(i)}}^{(j,t,t')} \neq y_{R^{(i)}}^{(j,t)}. \end{cases}$$

(9)
$$\mathcal{L}_{\mathrm{MRD}} = \sum_{j=1}^{|\mathcal{S}_{\mathrm{MRD}}^{(i)}|}\sum_{t'=1}^{T} l\big(y_j,\,\hat{y} = f_{\mathrm{MRD}}(s^{(j,t)}, s^{(j,t')};\,\Theta_{\mathrm{MRD}})\big).$$

We construct the new mis-recommendation training dataset as $\mathcal{S}_{\mathrm{MRD}}^{(i)} = \{s^{(j,t)}, s^{(j,t')}, c^{(j,t,t')}\}_{0 \le t' \le t = T}$. A dynamic-layers network $f_{\mathrm{MRD}}(\cdot)$ can then be trained on $\mathcal{S}_{\mathrm{MRD}}^{(i)}$ according to Eq. (9), where $t = T$ and the loss function $l(\cdot)$ is the cross entropy.
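The annotation-free construction of $\mathcal{S}_{\mathrm{MRD}}$ can be summarized in a short sketch: replay old click sequences, generate stale parameters from them, score the current sample, and keep the correctness bit as the label. The two callables below are placeholders for the EC-CDR generator and edge model, and the toy lambdas are ours.

```python
def build_mrd_dataset(sequences, labels, generate_params, predict):
    """S_MRD = {(s_T, s_t', c)} for t' = 0..T, per Eqs. (7)-(8):
    c = 1 iff parameters generated from s_t' still predict the
    current sample correctly (no extra annotation needed)."""
    T = len(sequences) - 1
    s_now, y_now = sequences[T], labels[T]
    rows = []
    for t_prev in range(T + 1):
        theta = generate_params(sequences[t_prev])  # stale dynamic parameters
        y_hat = predict(s_now, theta)               # applied to current data
        rows.append((s_now, sequences[t_prev], int(y_hat == y_now)))
    return rows

rows = build_mrd_dataset(
    sequences=[[1], [1, 4], [1, 4, 9]],
    labels=[0, 1, 1],
    generate_params=lambda s: sum(s),        # toy stand-in for g(·)
    predict=lambda s, theta: theta % 2,      # toy stand-in for f_rec
)
print(rows)  # [([1, 4, 9], [1], 1), ([1, 4, 9], [1, 4], 1), ([1, 4, 9], [1, 4, 9], 0)]
```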
3.2.3 Distribution Mapper. Although MRD can determine when to update edge parameters, simply mapping a click sequence to a single point in a high-dimensional space is insufficient because of the ubiquitous noise in click sequences. We therefore design the DM (Figure 4) to directly perceive data distribution shift and to quantify the uncertainty in the recommendation model's understanding of the data semantics. Inspired by the Conditional-VAE, we map click sequences to normal distributions. Unlike MRD alone, the DM-augmented objective adds a variable $u^{(j,t)}$ denoting the uncertainty to Eq. (9):

(10)
$$\mathcal{L}_{\mathrm{MRD}} = \sum_{j=1}^{|\mathcal{S}_{\mathrm{MRD}}^{(i)}|}\sum_{t'=1}^{T} l\big(y_j,\,\hat{y} = f_{\mathrm{MRD}}(s^{(j,t)}, s^{(j,t')}, u^{(j,t)};\,\Theta_{\mathrm{MRD}})\big).$$

The uncertainty variable $u^{(j,t)}$ reflects the recommendation model's understanding of the semantics of the data; DM focuses on how to learn it. The Distribution Mapper consists of three components, as shown in Figure 4: the Prior Network $P(\cdot)$ (PRN), the Posterior Network $Q(\cdot)$ (PON), and the Next-item Prediction Network $f(\cdot)$ (NPN), which includes the static layers $\Omega(\cdot)$ and the dynamic layers $f_{NPN}(\cdot)$. Note that $\Omega(\cdot)$ here is the same $\Omega(\cdot)$ as in the preceding subsections, so there is almost no additional resource consumption. We first introduce the three components separately, then the training and inference procedures.

Prior Network. The Prior Network, with weights $\Theta_{prior}$ and $\Theta'_{prior}$, maps the representation of a click sequence $s^{(j,t)}$ to a prior probability distribution. We set this prior to a normal distribution with mean $\mu_{prior}^{(j,t)} = \Omega_{prior}(s^{(j,t)};\,\Theta_{prior}) \in \mathbb{R}^N$ and variance $\sigma_{prior}^{(j,t)} = \Omega'_{prior}(s^{(j,t)};\,\Theta'_{prior}) \in \mathbb{R}^N$:

(11)
$$z^{(j,t)} \sim P(\cdot \mid s^{(j,t)}) = \mathcal{N}\big(\mu_{prior}^{(j,t)},\,\sigma_{prior}^{(j,t)}\big).$$

Posterior Network. The Posterior Network $\Omega_{post}$, with weights $\Theta_{post}$ and $\Theta'_{post}$, enhances the training of the Prior Network by introducing posterior information.
It maps the representation formed by concatenating the next-item representation $r^{(j,t)}$ with the click-sequence representation $s^{(j,t)}$ to a normal distribution. We define the posterior as a normal distribution with mean $\mu_{post}^{(j,t)} = \Omega_{post}(s^{(j,t)};\,\Theta_{post}) \in \mathbb{R}^N$ and variance $\sigma_{post}^{(j,t)} = \Omega'_{post}(s^{(j,t)};\,\Theta'_{post}) \in \mathbb{R}^N$:

(12)
$$z^{(j,t)} \sim Q(\cdot \mid s^{(j,t)}, r^{(j,t)}) = \mathcal{N}\big(\mu_{post}^{(j,t)},\,\sigma_{post}^{(j,t)}\big).$$

Next-item Prediction Network. The Next-item Prediction Network, with weights $\Theta_c$, predicts the embedding of the next item $\hat{r}^{(j,t)}$ to be clicked based on the user's click sequence $s^{(j,t)}$:

(13)
$$\hat{r}^{(j,t)} = f_c\big(e^{(j,t)} = \Omega(s^{(j,t)};\,\Theta_b),\,z^{(j,t)};\,\Theta_c\big), \qquad \hat{y}^{(j,t)} = f_{rec}\big(\Omega(x^{(j,t)};\,\Theta_{gb}),\,\hat{r}^{(j,t)};\,g(e^{(j,t)};\,\Theta_p)\big).$$

Training Procedure. Two losses are constructed during training: the recommendation prediction loss $\mathcal{L}_{rec}$ and the distribution difference loss $\mathcal{L}_{dist}$. As in most recommendation model training, $\mathcal{L}_{rec}$ uses the binary cross-entropy loss $l(\cdot)$ to penalize the difference between $\hat{y}^{(j,t)}$ and $y^{(j,t)}$; the difference is that here the NPN uses the feature $z$ sampled from the posterior distribution $Q$ in place of $e$ in Eq. (5). In addition, $\mathcal{L}_{dist}$ penalizes the difference between the posterior distribution $Q$ and the prior distribution $P$ via the Kullback-Leibler divergence.
$\mathcal{L}_{dist}$ "pulls" the posterior and prior distributions towards each other. The formulas for $\mathcal{L}_{rec}$ and $\mathcal{L}_{dist}$ are:

(14)
$$\mathcal{L}_{rec} = \mathbb{E}_{z \sim Q(\cdot \mid s^{(j,t)}, y^{(j,t)})}\big[\,l(y^{(j,t)} \mid \hat{y}^{(j,t)})\,\big],$$

(15)
$$\mathcal{L}_{dist} = D_{KL}\big(Q(z \mid s^{(j,t)}, y^{(j,t)})\,\big\|\,P(z \mid s^{(j,t)})\big).$$

Finally, we optimize DM according to

(16)
$$\mathcal{L}(y^{(j,t)}, s^{(j,t)}) = \mathcal{L}_{rec} + \beta \cdot \mathcal{L}_{dist}.$$

During training, the weights are randomly initialized.

Inference Procedure. In the inference procedure, the Posterior Network is removed from DM because no posterior information is available at inference time. The uncertainty variable $u^{(j,t)}$ is calculated from the multi-sampling outputs:

(17)
$$u^{(j,t)} = \mathrm{var}\big(\hat{r}_i = f_c(\Omega(s^{(j,t)};\,\Theta_b),\,z^{(j,t)}_{1\sim n};\,\Theta_c)\big),$$

where $n$ denotes the number of sampling times. Specifically, treating $\hat{r}^{(j,t)}$ as an $N \times 1$ vector and $\hat{r}_i^{(j,t),(k)}$ as the $k$-th entry of $\hat{r}_i^{(j,t)}$, the variance is computed as

(18)
$$\mathrm{var}(\hat{r}_i) = \sum_{k=1}^{N} \mathrm{var}\,\hat{r}^{(j,t),(k)}_{1\sim n}.$$
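Eqs. (17)-(18) amount to decoding several latent samples and summing the per-dimension variances. A small sketch follows; the linear head is a stand-in for $f_c$, and the standard-normal prior is a toy assumption.

```python
import torch

def uncertainty(mu, sigma, predict_head, n=10):
    """Draw n samples z ~ N(mu, sigma), decode each into a next-item
    embedding, and sum the per-dimension variances (Eqs. (17)-(18))."""
    zs = mu + sigma * torch.randn(n, *mu.shape)         # reparameterized draws
    r_hat = torch.stack([predict_head(z) for z in zs])  # (n, N) decoded outputs
    return r_hat.var(dim=0, unbiased=False).sum().item()

mu, sigma = torch.zeros(64), torch.ones(64)   # prior from the Prior Network
head = torch.nn.Linear(64, 64)                # stand-in for f_c(Ω(s), z)
print(uncertainty(mu, sigma, head, n=10))     # scalar u: larger = less certain
```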
3.2.4 On-edge Model Update. The Mis-Recommendation Score (MRS) is computed from the outputs of MRD and DM and directly determines whether the model needs to be updated:

(19)
$$\mathrm{MRS} = 1 - f_{\mathrm{MRD}}\big(s^{(j,t)}, s^{(j,t')};\,\Theta_{\mathrm{MRD}}\big),$$

(20)
$$\mathrm{Update} = \mathbb{1}\big(\mathrm{MRS} \le \mathrm{Threshold}\big),$$

where $\mathbb{1}(\cdot)$ is the indicator function. To obtain the threshold, we collect user data over a period of time, compute and sort the corresponding MRS values on the cloud, and set the threshold according to the load of the cloud server. For example, if the load of the cloud server must be reduced by 90%, i.e., to only 10% of its previous value, we simply send the value at the lowest 10% position to each edge as the threshold. During inference, each edge decides whether it needs to update its model, i.e., whether to request new parameters, based on Eqs. (19) and (20).
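The load-aware thresholding just described is a percentile rule; a sketch of both sides of the protocol follows (the function names and random scores are ours).

```python
import numpy as np

def pick_threshold(mrs_scores, capacity_fraction):
    """Cloud side: if only `capacity_fraction` of requests can be served,
    broadcast the MRS value at that (lowest) quantile as the threshold."""
    return float(np.quantile(mrs_scores, capacity_fraction))

def should_request(mrs, threshold):
    """Edge side, Eq. (20): request fresh parameters only when the
    mis-recommendation score falls at or below the threshold."""
    return mrs <= threshold

scores = np.random.default_rng(0).random(10_000)  # MRS over a collection window
thr = pick_threshold(scores, 0.10)                # keep ~10% of the old load
print(thr, should_request(0.03, thr))             # low MRS -> update the edge
```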
4 Experiments

We conducted extensive experiments to evaluate the effectiveness and generalizability of the proposed IntellectReq. Part of the experimental setup, results, and analysis appears in the Appendix.

4.1 Experimental Setup

Datasets. We evaluate on Amazon CDs (CDs), Amazon Electronic (Electronic), and Douban Book (Book), three widely used public benchmarks in recommendation tasks.

Evaluation Metrics. In the experiments, we use the widely adopted AUC (note that a 0.1% absolute AUC gain is regarded as significant for the CTR task), UAUC, HitRate, and NDCG as the metrics.

Baselines. To verify applicability, the following representative sequential modeling approaches are implemented and compared against their counterparts combined with the proposed method. DUET and APG are the state of the art in EC-CDR; they generate parameters through edge-cloud collaboration for different tasks. With the cloud generator model, the on-edge model can generalize well to the current data distribution in each session without training on the edge. GRU4Rec, DIN, and SASRec are three of the most widely used sequential recommendation methods in academia and industry, which respectively introduce the GRU, attention, and self-attention into recommendation systems. LOF and OC-SVM estimate the density of a given point via the ratio of the local reachability of its neighbors and itself.
They can be used to detect changes in the distribution of click sequences. For IntellectReq, we use SASRec as the edge model unless otherwise stated, but note that IntellectReq applies broadly to sequential recommendation models such as DIN, GRU4Rec, etc.

4.2 Experimental Results

4.2.1 Quantitative Results. (Figure 5: Performance w.r.t. request frequency curves based on the on-edge dynamic model with a one-step time difference. Figure 6: Performance w.r.t. request frequency based on the on-edge dynamic model with a one-step time difference. Figure 7: Performance w.r.t. request frequency based on the on-edge static model.)

Figures 5, 6, and 7 summarize the quantitative results of our framework and other methods on the CDs and Electronic datasets. The experiments are based on state-of-the-art EC-CDR frameworks, namely DUET and APG.
As shown in Figures 5-7, we combine the parameter generation framework with three sequential recommendation models: DIN, GRU4Rec, and SASRec. We evaluate these methods with the AUC and UAUC metrics on the CDs and Book datasets. We have the following findings: (1) If every edge model was updated at moment t−1, the DUET framework (DUET) and the APG framework (APG) can be viewed as the performance upper bound for all methods, since DUET and APG are evaluated at a fixed 100% request frequency while the other methods are evaluated at increasing frequencies. If every edge model instead equals the cloud-pretrained model, IntellectReq can even beat DUET, which indicates that in EC-CDR not every edge needs to be updated at every moment; in fact, model parameters generated from user data at some moments can be detrimental to performance. Note that directly comparing the other methods with DUET and APG is not fair, as DUET and APG use a fixed 100% request frequency and cannot be deployed at lower request frequencies. (2) The random request method (DUET (Random), APG (Random)) works well with any request budget.
However, it does not give the optimal request scheme for an arbitrary budget in most cases (such as Row 1), and the correlation between its performance and the request frequency tends to be linear. The performance of random requests is unstable and unpredictable, outperforming other methods only in a few cases. (3) LOF (DUET (LOF), APG (LOF)) and OC-SVM (DUET (OC-SVM), APG (OC-SVM)) can serve as simple baselines that produce the optimal request scheme under one special, specific request budget. However, they have two weaknesses. One is that they consume a lot of resources and thus significantly slow down computation. The other is that they only work under a specific request budget rather than an arbitrary one; for example, in the first row, the request frequency of OC-SVM can only take a single fixed value. (4) In most cases, our IntellectReq produces the optimal request scheme under any request budget.

4.2.2 Mis-Recommendation Score and Profit. (Figure 8: Mis-Recommendation Score and Revenue.)

To further study the effectiveness of MRD, we visualize the request timing and revenue in Figure 8.
4.2.2. Mis-Recommendation Score and Profit.

Figure 8. Mis-Recommendation Score and Revenue.

To further study the effectiveness of MRD, we visualize the request timing and revenue in Figure 8 and analyze the relationship between requests and revenue. Every 100 users were randomly assigned to one of 15 groups. The figure is divided into three parts: the first part assesses the request, and the second and third parts assess the benefit. The metric used here is the Mis-Recommendation Score (MRS), which evaluates the request revenue. MRS measures whether a recommendation is likely to be made in error; in other words, it can be viewed as an evaluation of the model's generalization ability. The lower the score, the higher the probability of a mis-recommendation and hence the higher the probability that model parameters should be requested.
• IntellectReq predicts the MRS based on the uncertainty and the click sequences at moments t and t−1 (a toy sketch of this scoring follows the list).
• DUET (Random) randomly selects edges that request the cloud model to update their on-edge parameters. In this case the MRS can be regarded as an arbitrary constant; we take the average MRS of IntellectReq as its value.
• DUET (w. Request) represents that all edge models are updated at moment t.
• DUET (w/o. Request) represents that no edge model is updated: at moment t−1 in two of the figures, and at moment 0 in the other.
• Request Revenue represents the revenue, i.e., the DUET (w. Request) curve minus the DUET (w/o. Request) curve.
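To ground the first bullet, the toy sketch below shows one way such a score could be computed: map the current click sequence to a normal distribution, draw several latent samples, and use the disagreement of the resulting predictions as uncertainty, so that a low MRS flags a likely mis-recommendation. The encoder/head shapes and the `1 - uncertainty` convention are our illustrative assumptions, not the authors' exact formulation.

```python
import torch

def mis_recommendation_score(mu, log_var, pred_head, k=16):
    """Toy MRS: sample k latent codes from N(mu, sigma^2) and score the
    agreement of the k resulting predictions (low agreement -> low MRS)."""
    std = torch.exp(0.5 * log_var)
    eps = torch.randn(k, *mu.shape)              # k reparameterized draws
    z = mu + eps * std                           # (k, batch, dim)
    preds = torch.sigmoid(pred_head(z))          # k CTR-style predictions per user
    uncertainty = preds.std(dim=0).mean(dim=-1)  # multi-sample disagreement
    return 1.0 - uncertainty                     # low score -> likely mis-recommendation

# Toy usage: a linear head stands in for the edge recommendation model.
pred_head = torch.nn.Linear(32, 1)
mu, log_var = torch.zeros(4, 32), torch.zeros(4, 32)  # from a sequence encoder (assumed)
print(mis_recommendation_score(mu, log_var, pred_head))
```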
From Figure 8, we have the following observations: (1) The trends of MRS and DUET Revenue typically move in opposite directions. When the MRS value is low, IntellectReq believes that the edge model cannot generalize well to the current data distribution, so it uses the most recent real-time data to request model parameters; the revenue at such moments is frequently positive and relatively high. When the MRS value is high, IntellectReq tends to keep using the model updated at the previous moment t−1 instead of t, because it believes the on-edge model generalizes well to the current data distribution; requesting model parameters at such moments frequently yields low, negative revenue. (2) Since the MRS of DUET (Random) is constant, it cannot predict the revenue of each request; its performance curve changes randomly because of the irregular ordering of the groups.

4.2.3. Ablation Study.

Figure 9. Ablation study on model architecture.

We conducted an ablation study to show the effectiveness of the different components of IntellectReq; the results are shown in Figure 9. We use w/o. and w. to denote "without" and "with", respectively:
• IntellectReq means both DM and MRD are used.
• (w/o. DM) means MRD is used but DM is not.
• (w/o. MRD) means DM is used but MRD is not.
From the figure, we have the following observations: (1) Overall, IntellectReq achieves the best performance across the evaluation metrics in most cases, demonstrating its effectiveness. (2) When the request frequency is small, the difference between IntellectReq and IntellectReq (w/o. DM) is not immediately apparent, as shown in Fig. 9(d); the difference becomes more noticeable as the request frequency increases within a certain range. In brief, the difference first shrinks, then grows, and finally shrinks again.

4.2.4. Time and Space Cost.

Most edges have limited storage space, so the on-edge model must be small yet sufficient. The edge's computing power is also rather limited, and completing the recommendation task on the edge requires a great deal of real-time processing, so the model deployed on the edge must be both simple and fast. We therefore analyze whether these methods are controllable and highly profitable on top of the DUET framework; the additional time and space consumption under this framework is shown in Table 1.

Table 1. Extra Time and Space Cost on the CDs dataset.

Method        Controllable  Profitable  Time Cost       Space Cost (Param.)
LOF           ✗             ✓           225s / 11.3ms   ≈0
OC-SVM        ✗             ✓           160s / 9.7ms    ≈0
Random        ✓             ✗           0s / 0.8ms      ≈0
IntellectReq  ✓             ✓           11s / 7.9ms     ≈5.06k

In the time-consumption column, the "/" separates the time consumption of cloud preprocessing from that of edge inference. Cloud preprocessing means that the cloud server first calculates MRS values based on recent user data, then determines the threshold based on its communication budget and sends the threshold to the edge.
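Under our reading of this description, the budget-to-threshold step admits a very small implementation: the cloud converts a target request budget into an MRS quantile, and the edge requests whenever its current score falls below the received threshold. The function names and the quantile rule are assumptions for illustration.

```python
import numpy as np

def budget_to_threshold(recent_mrs, budget):
    """Cloud preprocessing (assumed): pick the threshold so that roughly a
    `budget` fraction of recent scores fall below it and trigger requests."""
    return float(np.quantile(recent_mrs, budget))

def should_request(mrs_now, threshold):
    """Edge-side rule (assumed): a low MRS signals a likely
    mis-recommendation, so new parameters are worth requesting."""
    return mrs_now < threshold

recent = np.random.rand(10_000)          # MRS over recent user data (synthetic)
tau = budget_to_threshold(recent, 0.20)  # allow ~20% of moments to request
print(should_request(0.05, tau))         # True: score far below threshold
```

This quantile view is one way to realize the controllability claimed in Table 1: any communication budget maps directly to a threshold.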
Edge inference refers to the MRS computed whenever the click sequence on the edge is updated. The experimental results show that: 1) In terms of time consumption, random requests are the fastest for both cloud preprocessing and edge inference, followed by our IntellectReq; LOF and OC-SVM are the slowest. 2) In terms of space consumption, Random, LOF, and OC-SVM can all be regarded as requiring no additional space, whereas our method requires deploying an additional 5.06k parameters on the edge. 3) In terms of controllability, Random and our IntellectReq can realize edge-cloud communication under an arbitrary communication budget, while LOF and OC-SVM cannot. 4) In terms of high profitability, LOF, OC-SVM, and our IntellectReq all qualify, but random requests do not.
In general, our IntellectReq requires only minimal time consumption (it does not affect real-time performance) and space consumption (it is easy to deploy on smart edges) while achieving both controllability and high profitability.

5. Conclusion

In this paper, we argue that under the EC-CDR framework, most communications requesting new parameters from the cloud-based recommendation system are unnecessary because on-edge data distributions are often stable. We introduced IntellectReq, a low-resource solution for estimating the value of each request and ensuring adaptive, high-revenue edge-cloud communication. IntellectReq formulates a novel edge-intelligence task to identify out-of-domain data and maps real-time user behavior to a normal distribution, using multi-sample outputs to assess the edge model's adaptability to user actions. Extensive experiments on three public benchmarks confirm IntellectReq's efficiency and broad applicability, promoting a more effective edge-cloud collaborative recommendation approach.

ACKNOWLEDGMENT

This work was supported by the National Key R&D Program of China (No. 2022ZD0119100), the Scientific Research Fund of Zhejiang Provincial Education Department (Y202353679), the National Natural Science Foundation of China (No. 62376243, 62037001, U20A20387), the StarryNight Science Fund of Zhejiang University Shanghai Institute for Advanced Study (SN-ZJU-SIAS-0010), a Project by Shanghai AI Laboratory (P22KS00111), and the Program of Zhejiang Province Science and Technology (2022C01044).

References

Markus M. Breunig, Hans-Peter Kriegel, Raymond T. Ng, and Jörg Sander. 2000. LOF: Identifying density-based local outliers. In Proceedings of the 2000 ACM SIGMOD International Conference on Management of Data. 93–104.
Han Cai, Chuang Gan, Ligeng Zhu, and Song Han. 2020. TinyTL: Reduce activations, not trainable parameters for efficient on-device learning. (2020).
Defu Cao, Yixiang Zheng, Parisa Hassanzadeh, Simran Lamba, Xiaomo Liu, and Yan Liu. 2023. Large Scale Financial Time Series Forecasting with Multi-faceted Model. In Proceedings of the Fourth ACM International Conference on AI in Finance (ICAIF '23). Association for Computing Machinery, New York, NY, USA, 472–480. https://doi.org/10.1145/3604237.3626868
Jianxin Chang, Chen Gao, Yu Zheng, Yiqun Hui, Yanan Niu, Yang Song, Depeng Jin, and Yong Li. 2021. Sequential recommendation with graph neural networks. In Proceedings of the 44th International ACM SIGIR Conference on Research and Development in Information Retrieval. 378–387.
Zhengyu Chen and Donglin Wang. 2021. Multi-Initialization Meta-Learning with Domain Adaptation. In ICASSP 2021 - 2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 1390–1394.
Zhengyu Chen, Teng Xiao, and Kun Kuang. 2022. BA-GNN: On Learning Bias-Aware Graph Neural Network. In 2022 IEEE 38th International Conference on Data Engineering (ICDE). IEEE, 3012–3024.
Zhengyu Chen, Teng Xiao, Kun Kuang, Zheqi Lv, Min Zhang, Jinluan Yang, Chengqiang Lu, Hongxia Yang, and Fei Wu. 2023. Learning to Reweight for Graph Neural Network. arXiv preprint arXiv:2312.12475 (2023).
Zhengyu Chen, Teng Xiao, Kun Kuang, Zheqi Lv, Min Zhang, Jinluan Yang, Chengqiang Lu, Hongxia Yang, and Fei Wu. 2024. Learning to Reweight for Generalizable Graph Neural Network. In Proceedings of the AAAI Conference on Artificial Intelligence (2024).
Zhengyu Chen, Ziqing Xu, and Donglin Wang. 2021. Deep transfer tensor decomposition with orthogonal constraint for recommender systems. In Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 35. 4010–4018.
David Ha, Andrew Dai, and Quoc V. Le. 2017. HyperNetworks. (2017).
Balázs Hidasi, Alexandros Karatzoglou, Linas Baltrunas, and Domonkos Tikk. 2016. Session-based recommendations with recurrent neural networks. In International Conference on Learning Representations (2016).
Rongjie Huang, Jiawei Huang, Dongchao Yang, Yi Ren, Luping Liu, Mingze Li, Zhenhui Ye, Jinglin Liu, Xiang Yin, and Zhou Zhao. 2023. Make-An-Audio: Text-to-audio generation with prompt-enhanced diffusion models. arXiv preprint arXiv:2301.12661 (2023).
Rongjie Huang, Max W. Y. Lam, Jun Wang, Dan Su, Dong Yu, Yi Ren, and Zhou Zhao. 2022a. FastDiff: A Fast Conditional Diffusion Model for High-Quality Speech Synthesis. In IJCAI. ijcai.org, 4157–4163.
Rongjie Huang, Yi Ren, Jinglin Liu, Chenye Cui, and Zhou Zhao. 2022b. GenerSpeech: Towards style transfer for generalizable out-of-domain text-to-speech. Advances in Neural Information Processing Systems 35 (2022), 10970–10983.
Wei Ji, Renjie Liang, Lizi Liao, Hao Fei, and Fuli Feng. 2023a. Partial Annotation-based Video Moment Retrieval via Iterative Learning. In Proceedings of the 31st ACM International Conference on Multimedia.
Wei Ji, Xiangyan Liu, An Zhang, Yinwei Wei, and Xiang Wang. 2023b. Online Distillation-enhanced Multi-modal Transformer for Sequential Recommendation. In Proceedings of the 31st ACM International Conference on Multimedia.
Wang-Cheng Kang and Julian McAuley. 2018. Self-attentive sequential recommendation. In 2018 IEEE International Conference on Data Mining (ICDM). IEEE, 197–206.
Sara Latifi, Noemi Mauro, and Dietmar Jannach. 2021. Session-aware recommendation: A surprising quest for the state-of-the-art. Information Sciences 573 (2021), 291–315.
Haoxuan Li, Yanghao Xiao, Chunyuan Zheng, Peng Wu, and Peng Cui. 2023e. Propensity matters: Measuring and enhancing balancing for recommendation. In International Conference on Machine Learning. PMLR, 20182–20194.
Haoxuan Li, Yanghao Xiao, Chunyuan Zheng, Peng Wu, Zhi Geng, Xu Chen, and Peng Cui. 2024. Debiased Collaborative Filtering with Kernel-based Causal Balancing. In International Conference on Learning Representations.
Juncheng Li, Xin He, Longhui Wei, Long Qian, Linchao Zhu, Lingxi Xie, Yueting Zhuang, Qi Tian, and Siliang Tang. 2022a. Fine-grained semantically aligned vision-language pre-training. Advances in Neural Information Processing Systems 35 (2022), 7290–7303.
Juncheng Li, Kaihang Pan, Zhiqi Ge, Minghe Gao, Hanwang Zhang, Wei Ji, Wenqiao Zhang, Tat-Seng Chua, Siliang Tang, and Yueting Zhuang. 2023a. Fine-tuning Multimodal LLMs to Follow Zero-shot Demonstrative Instructions. arXiv preprint arXiv:2308.04152 (2023).
Li Li, Chenwei Wang, You Qin, Wei Ji, and Renjie Liang. 2023b. Biased-Predicate Annotation Identification via Unbiased Visual Predicate Representation. In Proceedings of the 31st ACM International Conference on Multimedia (MM '23). Association for Computing Machinery, New York, NY, USA, 4410–4420. https://doi.org/10.1145/3581783.3611847
Mengze Li, Han Wang, Wenqiao Zhang, Jiaxu Miao, Zhou Zhao, Shengyu Zhang, Wei Ji, and Fei Wu. 2023d. WINNER: Weakly-supervised hierarchical decomposition and alignment for spatio-temporal video grounding. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 23090–23099.
Mengze Li, Tianbao Wang, Jiahe Xu, Kairong Han, Shengyu Zhang, Zhou Zhao, Jiaxu Miao, Wenqiao Zhang, Shiliang Pu, and Fei Wu. 2023c. Multi-modal Action Chain Abductive Reasoning. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). 4617–4628.
Mengze Li, Tianbao Wang, Haoyu Zhang, Shengyu Zhang, Zhou Zhao, Jiaxu Miao, Wenqiao Zhang, Wenming Tan, Jin Wang, Peng Wang, et al. 2022b. End-to-End Modeling via Information Tree for One-Shot Natural Language Spatial Video Grounding. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). 8707–8717.
Xin-Yu Lin, Yi-Yan Xu, Wen-Jie Wang, Yang Zhang, and Fu-Li Feng. 2023. Mitigating Spurious Correlations for Self-supervised Recommendation. Machine Intelligence Research 20, 2 (2023), 263–275.
Zheqi Lv, Feng Wang, Shengyu Zhang, Kun Kuang, Hongxia Yang, and Fei Wu. 2022. Personalizing Intervened Network for Long-tailed Sequential User Behavior Modeling. arXiv preprint arXiv:2208.09130 (2022).
Zheqi Lv, Feng Wang, Shengyu Zhang, Wenqiao Zhang, Kun Kuang, and Fei Wu. 2023a. Parameters Efficient Fine-Tuning for Long-Tailed Sequential Recommendation. In CAAI International Conference on Artificial Intelligence. Springer, 442–459.
Zheqi Lv, Wenqiao Zhang, Shengyu Zhang, Kun Kuang, Feng Wang, Yongwei Wang, Zhengyu Chen, Tao Shen, Hongxia Yang, Beng Chin Ooi, and Fei Wu. 2023b. DUET: A Tuning-Free Device-Cloud Collaborative Parameters Generation Framework for Efficient Device Model Generalization. In Proceedings of the ACM Web Conference 2023.
Othmane Marfoq, Giovanni Neglia, Aurélien Bellet, Laetitia Kameni, and Richard Vidal. 2021. Federated multi-task learning under a mixture of distributions. Advances in Neural Information Processing Systems 34 (2021), 15434–15447.
Brendan McMahan, Eider Moore, Daniel Ramage, Seth Hampson, and Blaise Aguera y Arcas. 2017. Communication-efficient learning of deep networks from decentralized data. In Artificial Intelligence and Statistics. PMLR, 1273–1282.
Jed Mills, Jia Hu, and Geyong Min. 2021. Multi-task federated learning for personalised deep neural networks in edge computing. IEEE Transactions on Parallel and Distributed Systems 33, 3 (2021), 630–641.
Xufeng Qian, Yue Xu, Fuyu Lv, Shengyu Zhang, Ziwen Jiang, Qingwen Liu, Xiaoyi Zeng, Tat-Seng Chua, and Fei Wu. 2022. Intelligent Request Strategy Design in Recommender System. In KDD '22: The 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining. ACM, 3772–3782.
Fang-Yu Qin, Zhe-Qi Lv, Dan-Ni Wang, Bo Hu, and Chao Wu. 2020. Health status prediction for the elderly based on machine learning. Archives of Gerontology and Geriatrics 90 (2020), 104121.
Steffen Rendle, Christoph Freudenthaler, and Lars Schmidt-Thieme. 2010. Factorizing personalized Markov chains for next-basket recommendation. In Proceedings of the Web Conference (2010).
Victor Sanh, Lysandre Debut, Julien Chaumond, and Thomas Wolf. 2019. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019).
Jiajie Su, Chaochao Chen, Zibin Lin, Xi Li, Weiming Liu, and Xiaolin Zheng. 2023a. Personalized Behavior-Aware Transformer for Multi-Behavior Sequential Recommendation. In Proceedings of the 31st ACM International Conference on Multimedia. 6321–6331.
Jiajie Su, Chaochao Chen, Weiming Liu, Fei Wu, Xiaolin Zheng, and Haoming Lyu. 2023b. Enhancing Hierarchy-Aware Graph Networks with Deep Dual Clustering for Session-based Recommendation. In Proceedings of the ACM Web Conference 2023. 165–176.
Fei Sun, Jun Liu, Jian Wu, Changhua Pei, Xiao Lin, Wenwu Ou, and Peng Jiang. 2019. BERT4Rec: Sequential recommendation with bidirectional encoder representations from transformer. In Proceedings of the 28th ACM International Conference on Information and Knowledge Management. 1441–1450.
Zihao Tang, Zheqi Lv, Shengyu Zhang, Fei Wu, and Kun Kuang. 2024a. ModelGPT: Unleashing LLM's Capabilities for Tailored Model Generation. arXiv preprint arXiv:2402.12408 (2024).
Zihao Tang, Zheqi Lv, Shengyu Zhang, Yifan Zhou, Xinyu Duan, Kun Kuang, and Fei Wu. 2024b. AuG-KD: Anchor-Based Mixup Generation for Out-of-Domain Knowledge Distillation. In 12th International Conference on Learning Representations, ICLR 2024, Vienna, Austria, May 7-11, 2024. OpenReview.net. https://openreview.net/forum?id=fcqWJ8JgMR
David Martinus Johannes Tax. 2002. One-class classification: Concept learning in the absence of counter-examples. (2002).
Yunze Tong, Junkun Yuan, Min Zhang, Didi Zhu, Keli Zhang, Fei Wu, and Kun Kuang. 2023. Quantitatively Measuring and Contrastively Exploring Heterogeneity for Domain Generalization. In KDD. ACM, 2189–2200.
Xiao Wang, Peng Cui, Jing Wang, Jian Pei, Wenwu Zhu, and Shiqiang Yang. 2017. Community preserving network embedding. In Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 31.
Shu Wu, Yuyuan Tang, Yanqiao Zhu, Liang Wang, Xing Xie, and Tieniu Tan. 2019. Session-based recommendation with graph neural networks. In Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 33. 346–353.
Yiquan Wu, Weiming Lu, Yating Zhang, Adam Jatowt, Jun Feng, Changlong Sun, Fei Wu, and Kun Kuang. 2023a. Focus-aware response generation in inquiry conversation. In Findings of the Association for Computational Linguistics: ACL 2023. 12585–12599.
Yiquan Wu, Siying Zhou, Yifei Liu, Weiming Lu, Xiaozhong Liu, Yating Zhang, Changlong Sun, Fei Wu, and Kun Kuang. 2023b. Precedent-Enhanced Legal Judgment Prediction with LLM and Domain-Model Collaboration. arXiv preprint arXiv:2310.09241 (2023).
Xinyu Lin, Wenjie Wang, Jujia Zhao, Yongqi Li, Fuli Feng, and Tat-Seng Chua. 2024. Temporally and Distributionally Robust Optimization for Cold-start Recommendation. In AAAI.
Bencheng Yan, Pengjie Wang, Kai Zhang, Feng Li, Jian Xu, and Bo Zheng. 2022b. APG: Adaptive Parameter Generation Network for Click-Through Rate Prediction. In Advances in Neural Information Processing Systems.
Yikai Yan, Chaoyue Niu, Renjie Gu, Fan Wu, Shaojie Tang, Lifeng Hua, Chengfei Lyu, and Guihai Chen. 2022a. On-Device Learning for Model Personalization with Large-Scale Cloud-Coordinated Domain Adaption. In KDD '22: The 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, Washington, DC, USA, August 14-18, 2022. 2180–2190.
Jiangchao Yao, Feng Wang, Xichen Ding, Shaohu Chen, Bo Han, Jingren Zhou, and Hongxia Yang. 2022a. Device-cloud Collaborative Recommendation via Meta Controller. In KDD '22: The 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, Washington, DC, USA, August 14-18, 2022. 4353–4362.
Jiangchao Yao, Shengyu Zhang, Yang Yao, Feng Wang, Jianxin Ma, Jianwei Zhang, Yunfei Chu, Luo Ji, Kunyang Jia, Tao Shen, et al. 2022b. Edge-Cloud Polarization and Collaboration: A Comprehensive Survey for AI. IEEE Transactions on Knowledge and Data Engineering (2022).
Fengda Zhang, Kun Kuang, Long Chen, Yuxuan Liu, Chao Wu, and Jun Xiao. 2022a. Fairness-aware contrastive learning with partially annotated sensitive attributes. In The Eleventh International Conference on Learning Representations.
Fengda Zhang, Kun Kuang, Long Chen, Zhaoyang You, Tao Shen, Jun Xiao, Yin Zhang, Chao Wu, Fei Wu, Yueting Zhuang, et al. 2023b. Federated unsupervised representation learning. Frontiers of Information Technology & Electronic Engineering 24, 8 (2023), 1181–1193.
caligraphic_572023aZhang caligraphic_et caligraphic_al.Zhang, caligraphic_Feng, caligraphic_Kuang, caligraphic_Zhang, caligraphic_Zhao, caligraphic_Yang, caligraphic_Chua, caligraphic_and caligraphic_WuZhang caligraphic_et caligraphic_al. caligraphic_(2023a)zhangsy2023personalized caligraphic_Shengyu caligraphic_Zhang, caligraphic_Fuli caligraphic_Feng, caligraphic_Kun caligraphic_Kuang, caligraphic_Wenqiao caligraphic_Zhang, caligraphic_Zhou caligraphic_Zhao, caligraphic_Hongxia caligraphic_Yang, caligraphic_Tat-Seng caligraphic_Chua, caligraphic_and caligraphic_Fei caligraphic_Wu. caligraphic_2023a. caligraphic_Personalized caligraphic_Latent caligraphic_Structure caligraphic_Learning caligraphic_for caligraphic_Recommendation. caligraphic_IEEE caligraphic_Transactions caligraphic_on caligraphic_Pattern caligraphic_Analysis caligraphic_and caligraphic_Machine caligraphic_Intelligence caligraphic_(2023). caligraphic_582020Zhang caligraphic_et caligraphic_al.Zhang, caligraphic_Jiang, caligraphic_Wang, caligraphic_Kuang, caligraphic_Zhao, caligraphic_Zhu, caligraphic_Yu, caligraphic_Yang, caligraphic_and caligraphic_WuZhang caligraphic_et caligraphic_al. caligraphic_(2020)zhangsyDBLP:conf/mm/ZhangJWKZZYYW20 caligraphic_Shengyu caligraphic_Zhang, caligraphic_Tan caligraphic_Jiang, caligraphic_Tan caligraphic_Wang, caligraphic_Kun caligraphic_Kuang, caligraphic_Zhou caligraphic_Zhao, caligraphic_Jianke caligraphic_Zhu, caligraphic_Jin caligraphic_Yu, caligraphic_Hongxia caligraphic_Yang, caligraphic_and caligraphic_Fei caligraphic_Wu. caligraphic_2020. caligraphic_DeVLBert: caligraphic_Learning caligraphic_Deconfounded caligraphic_Visio-Linguistic caligraphic_Representations. caligraphic_In caligraphic_MM caligraphic_'20: caligraphic_The caligraphic_28th caligraphic_ACM caligraphic_International caligraphic_Conference caligraphic_on caligraphic_Multimedia. caligraphic_ACM, caligraphic_4373–4382. caligraphic_592023cZhang caligraphic_et caligraphic_al.Zhang, caligraphic_Liu, caligraphic_Zeng, caligraphic_Ooi, caligraphic_Tang, caligraphic_and caligraphic_ZhuangZhang caligraphic_et caligraphic_al. caligraphic_(2023c)zhang2023learning caligraphic_Wenqiao caligraphic_Zhang, caligraphic_Changshuo caligraphic_Liu, caligraphic_Lingze caligraphic_Zeng, caligraphic_Bengchin caligraphic_Ooi, caligraphic_Siliang caligraphic_Tang, caligraphic_and caligraphic_Yueting caligraphic_Zhuang. caligraphic_2023c. caligraphic_Learning caligraphic_in caligraphic_Imperfect caligraphic_Environment: caligraphic_Multi-Label caligraphic_Classification caligraphic_with caligraphic_Long-Tailed caligraphic_Distribution caligraphic_and caligraphic_Partial caligraphic_Labels. caligraphic_In caligraphic_Proceedings caligraphic_of caligraphic_the caligraphic_IEEE/CVF caligraphic_International caligraphic_Conference caligraphic_on caligraphic_Computer caligraphic_Vision. caligraphic_1423–1432. caligraphic_602024Zhang caligraphic_and caligraphic_LvZhang caligraphic_and caligraphic_LvZhang caligraphic_and caligraphic_Lv caligraphic_(2024)zhang2024revisiting caligraphic_Wenqiao caligraphic_Zhang caligraphic_and caligraphic_Zheqi caligraphic_Lv. caligraphic_2024. caligraphic_Revisiting caligraphic_the caligraphic_Domain caligraphic_Shift caligraphic_and caligraphic_Sample caligraphic_Uncertainty caligraphic_in caligraphic_Multi-source caligraphic_Active caligraphic_Domain caligraphic_Transfer. 
caligraphic_In caligraphic_Proceedings caligraphic_of caligraphic_the caligraphic_IEEE/CVF caligraphic_Conference caligraphic_on caligraphic_Computer caligraphic_Vision caligraphic_and caligraphic_Pattern caligraphic_Recognition. caligraphic_612021Zhang caligraphic_et caligraphic_al.Zhang, caligraphic_Shi, caligraphic_Guo, caligraphic_Zhang, caligraphic_Cai, caligraphic_Li, caligraphic_Luo, caligraphic_and caligraphic_ZhuangZhang caligraphic_et caligraphic_al. caligraphic_(2021)zhang2021magic caligraphic_Wenqiao caligraphic_Zhang, caligraphic_Haochen caligraphic_Shi, caligraphic_Jiannan caligraphic_Guo, caligraphic_Shengyu caligraphic_Zhang, caligraphic_Qingpeng caligraphic_Cai, caligraphic_Juncheng caligraphic_Li, caligraphic_Sihui caligraphic_Luo, caligraphic_and caligraphic_Yueting caligraphic_Zhuang. caligraphic_2021. caligraphic_MAGIC: caligraphic_Multimodal caligraphic_relAtional caligraphic_Graph caligraphic_adversarIal caligraphic_inferenCe caligraphic_for caligraphic_Diverse caligraphic_and caligraphic_Unpaired caligraphic_Text-based caligraphic_Image caligraphic_Captioning. caligraphic_arXiv caligraphic_preprint caligraphic_arXiv:2112.06558 caligraphic_(2021). caligraphic_622022bZhang caligraphic_et caligraphic_al.Zhang, caligraphic_Zhu, caligraphic_Hallinan, caligraphic_Zhang, caligraphic_Makmur, caligraphic_Cai, caligraphic_and caligraphic_OoiZhang caligraphic_et caligraphic_al. caligraphic_(2022b)zhang2022boostmis caligraphic_Wenqiao caligraphic_Zhang, caligraphic_Lei caligraphic_Zhu, caligraphic_James caligraphic_Hallinan, caligraphic_Shengyu caligraphic_Zhang, caligraphic_Andrew caligraphic_Makmur, caligraphic_Qingpeng caligraphic_Cai, caligraphic_and caligraphic_Beng caligraphic_Chin caligraphic_Ooi. caligraphic_2022b. caligraphic_Boostmis: caligraphic_Boosting caligraphic_medical caligraphic_image caligraphic_semi-supervised caligraphic_learning caligraphic_with caligraphic_adaptive caligraphic_pseudo caligraphic_labeling caligraphic_and caligraphic_informative caligraphic_active caligraphic_annotation. caligraphic_In caligraphic_Proceedings caligraphic_of caligraphic_the caligraphic_IEEE/CVF caligraphic_Conference caligraphic_on caligraphic_Computer caligraphic_Vision caligraphic_and caligraphic_Pattern caligraphic_Recognition. caligraphic_20666–20676. caligraphic_632024Zhang caligraphic_et caligraphic_al.Zhang, caligraphic_Zhu, caligraphic_Song, caligraphic_Koniusz, caligraphic_King, caligraphic_et caligraphic_al.Zhang caligraphic_et caligraphic_al. caligraphic_(2024)zhang2024mitigating caligraphic_Yifei caligraphic_Zhang, caligraphic_Hao caligraphic_Zhu, caligraphic_Zixing caligraphic_Song, caligraphic_Piotr caligraphic_Koniusz, caligraphic_Irwin caligraphic_King, caligraphic_et caligraphic_al. caligraphic_2024. caligraphic_Mitigating caligraphic_the caligraphic_Popularity caligraphic_Bias caligraphic_of caligraphic_Graph caligraphic_Collaborative caligraphic_Filtering: caligraphic_A caligraphic_Dimensional caligraphic_Collapse caligraphic_Perspective. caligraphic_Advances caligraphic_in caligraphic_Neural caligraphic_Information caligraphic_Processing caligraphic_Systems caligraphic_36 caligraphic_(2024). caligraphic_642018Zhou caligraphic_et caligraphic_al.Zhou, caligraphic_Zhu, caligraphic_Song, caligraphic_Fan, caligraphic_Zhu, caligraphic_Ma, caligraphic_Yan, caligraphic_Jin, caligraphic_Li, caligraphic_and caligraphic_GaiZhou caligraphic_et caligraphic_al. 
caligraphic_(2018)ref:din caligraphic_Guorui caligraphic_Zhou, caligraphic_Xiaoqiang caligraphic_Zhu, caligraphic_Chenru caligraphic_Song, caligraphic_Ying caligraphic_Fan, caligraphic_Han caligraphic_Zhu, caligraphic_Xiao caligraphic_Ma, caligraphic_Yanghui caligraphic_Yan, caligraphic_Junqi caligraphic_Jin, caligraphic_Han caligraphic_Li, caligraphic_and caligraphic_Kun caligraphic_Gai. caligraphic_2018. caligraphic_Deep caligraphic_interest caligraphic_network caligraphic_for caligraphic_click-through caligraphic_rate caligraphic_prediction. caligraphic_In caligraphic_Proceedings caligraphic_of caligraphic_the caligraphic_24th caligraphic_ACM caligraphic_SIGKDD caligraphic_International caligraphic_Conference caligraphic_on caligraphic_Knowledge caligraphic_Discovery caligraphic_& caligraphic_Data caligraphic_Mining. caligraphic_1059–1068. caligraphic_652023aZhu caligraphic_et caligraphic_al.Zhu, caligraphic_Li, caligraphic_Shao, caligraphic_Hao, caligraphic_Wu, caligraphic_Kuang, caligraphic_Xiao, caligraphic_and caligraphic_WuZhu caligraphic_et caligraphic_al. caligraphic_(2023a)DBLP:conf/mm/ZhuL0HWK0W23 caligraphic_Didi caligraphic_Zhu, caligraphic_Yinchuan caligraphic_Li, caligraphic_Yunfeng caligraphic_Shao, caligraphic_Jianye caligraphic_Hao, caligraphic_Fei caligraphic_Wu, caligraphic_Kun caligraphic_Kuang, caligraphic_Jun caligraphic_Xiao, caligraphic_and caligraphic_Chao caligraphic_Wu. caligraphic_2023a. caligraphic_Generalized caligraphic_Universal caligraphic_Domain caligraphic_Adaptation caligraphic_with caligraphic_Generative caligraphic_Flow caligraphic_Networks. caligraphic_In caligraphic_ACM caligraphic_Multimedia. caligraphic_ACM, caligraphic_8304–8315. caligraphic_662023bZhu caligraphic_et caligraphic_al.Zhu, caligraphic_Li, caligraphic_Yuan, caligraphic_Li, caligraphic_Kuang, caligraphic_and caligraphic_WuZhu caligraphic_et caligraphic_al. caligraphic_(2023b)zhu2023universal caligraphic_Didi caligraphic_Zhu, caligraphic_Yinchuan caligraphic_Li, caligraphic_Junkun caligraphic_Yuan, caligraphic_Zexi caligraphic_Li, caligraphic_Kun caligraphic_Kuang, caligraphic_and caligraphic_Chao caligraphic_Wu. caligraphic_2023b. caligraphic_Universal caligraphic_domain caligraphic_adaptation caligraphic_via caligraphic_compressive caligraphic_attention caligraphic_matching. caligraphic_In caligraphic_Proceedings caligraphic_of caligraphic_the caligraphic_IEEE/CVF caligraphic_International caligraphic_Conference caligraphic_on caligraphic_Computer caligraphic_Vision. caligraphic_6974–6985. 
Appendix A

This is the Appendix for "Intelligent Model Update Strategy for Sequential Recommendation".

A.1 Supplementary Method

A.1.1 Notations and Definitions. We summarize the notations and definitions in Table 2.

Table 2. Notations and Definitions
  $u$: user
  $v$: item
  $s$: behavior sequence
  $d$: edge
  $\mathcal{D}=\{d^{(i)}\}_{i=1}^{\mathcal{N}_d}$: set of edges
  $\mathcal{S}_{H^{(i)}}$ / $\mathcal{S}_{R^{(i)}}$ / $\mathcal{S}_{MRD}$: history samples / real-time samples / MRD samples
  $\mathcal{N}_d$ / $\mathcal{N}_{H^{(i)}}$ / $\mathcal{N}_{R^{(i)}}$: number of edges / history samples / real-time samples
  $\Theta_g$ / $\Theta_d$ / $\Theta_{MRD}$: parameters of the global cloud model / local edge model / local MRD model
  $\mathcal{M}_g(\cdot;\Theta_g)$ / $\mathcal{M}_{d^{(i)}}(\cdot;\Theta_{d^{(i)}})$ / $\mathcal{M}_c^{(i)t}(\mathcal{S}_{MRD};\Theta_{MRD})$: global cloud model / local edge recommendation model / local edge control model
  $\mathcal{L}_{rec}$ / $\mathcal{L}_{MRD}$: recommendation loss / mis-recommendation loss
  $\Omega$: feature extractor

A.1.2 Optimization Target. To describe it in the simplest way, assume the set of edges is $\mathcal{D}=\{d^{(i)}\}_{i=1}^{\mathcal{N}_d}$, the set updated by the baseline method is $\mathcal{D}'_u=\{d^{(i)}\}_{i=1}^{\mathcal{N}'_u}$, and the set updated by our method is $\mathcal{D}_u=\{d^{(i)}\}_{i=1}^{\mathcal{N}_u}$; $\mathcal{N}_d$, $\mathcal{N}'_u$, and $\mathcal{N}_u$ are the sizes of $\mathcal{D}$, $\mathcal{D}'_u$, and $\mathcal{D}_u$, respectively. The communication upper bound is set to $\mathcal{N}_{thres}$.
Suppose the ground-truth value $y$, the prediction of the baseline model $\hat{y}'$, and the prediction of our model $\hat{y}$ are row vectors. Our optimization target is then to obtain the highest model performance while limiting the upper bound of the communication frequency:

(21)
$$\begin{aligned} \text{Maximize}\quad & \hat{y}\,y^{\top}, \\ \text{Subject to}\quad & 0 \le \mathcal{N}_u \le \mathcal{N}_{thres}, \\ & \mathcal{N}_u \le \mathcal{N}'_u, \\ & \mathcal{D}_u \subset \mathcal{D}. \end{aligned}$$

In this case, the improvement of our method is $\Delta = \hat{y}\,y^{\top} - \hat{y}'\,y^{\top}$. Alternatively, the target can be regarded as reducing the communication frequency without degrading performance:

(22)
$$\begin{aligned} \text{Minimize}\quad & \mathcal{N}_u, \\ \text{Subject to}\quad & 0 \le \mathcal{N}_u \le \mathcal{N}_{thres}, \\ & \hat{y}\,y^{\top} \ge \hat{y}'\,y^{\top}, \\ & \mathcal{D}_u \subset \mathcal{D}. \end{aligned}$$

In this case, the improvement of our method is $\Delta = \mathcal{N}'_u - \mathcal{N}_u$.

A.2 Supplementary Experimental Results

A.2.1 Datasets. We evaluate IntellectReq and the baselines on Amazon CDs (CDs) (https://jmcauley.ucsd.edu/data/amazon/), Amazon Electronic (Electronic), and Douban Book (Book) (https://www.kaggle.com/datasets/fengzhujoey/douban-datasetratingreviewside-information), three widely used public benchmarks for recommendation tasks; Table 3 shows their statistics. Following conventional practice, all user-item pairs in the dataset are treated as positive samples. To conduct sequential recommendation experiments, we arrange the items clicked by each user into a sequence ordered by timestamp.
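As a concrete illustration of this preprocessing step, the following is a minimal Python sketch (ours; the tuple layout and function name are assumptions for exposition) of arranging raw click logs into per-user, timestamp-ordered behavior sequences:

```python
from collections import defaultdict

# Sketch: turn raw (user, item, timestamp) click logs into per-user
# behavior sequences ordered by timestamp, as described above.

def build_sequences(clicks: list[tuple[str, str, int]]) -> dict[str, list[str]]:
    per_user = defaultdict(list)
    for user, item, ts in clicks:
        per_user[user].append((ts, item))
    # Sort each user's clicks by timestamp and keep only the item ids.
    return {u: [item for _, item in sorted(events)] for u, events in per_user.items()}

if __name__ == "__main__":
    clicks = [("u1", "cd_42", 3), ("u1", "cd_7", 1), ("u2", "cd_9", 5), ("u1", "cd_13", 2)]
    print(build_sequences(clicks))  # {'u1': ['cd_7', 'cd_13', 'cd_42'], 'u2': ['cd_9']}
```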
Following prior work, negative sampling is performed at a ratio of 1:4 in the training set and 1:99 in the testing set. Negative sampling treats all user-item pairs that do not appear in the dataset as negative samples.

Table 3. Statistics of Datasets.
                 Amazon CDs   Amazon Electronic   Douban Books
  #User          1,578,597    4,201,696           46,549
  #Item          486,360      476,002             212,996
  #Interaction   3,749,004    7,824,482           1,861,533
  Density        0.0000049    0.0000039           0.0002746

A.2.2 Evaluation Metrics. In the experiments, we use the widely adopted AUC, UAUC, HitRate, and NDCG as the metrics to evaluate model performance. They are defined by the following equations:

(23)
$$\mathrm{AUC}=\frac{\sum_{x_0\in\mathcal{D}^T}\sum_{x_1\in\mathcal{D}^F}\mathbb{1}[f(x_1)<f(x_0)]}{|\mathcal{D}^T|\,|\mathcal{D}^F|},$$

(24)
$$\mathrm{UAUC}=\frac{1}{|\mathcal{U}|}\sum_{u\in\mathcal{U}}\frac{\sum_{x_0\in\mathcal{D}_u^T}\sum_{x_1\in\mathcal{D}_u^F}\mathbb{1}[f(x_1)<f(x_0)]}{|\mathcal{D}_u^T|\,|\mathcal{D}_u^F|},$$

(25)
$$\mathrm{NDCG@K}=\frac{1}{|\mathcal{U}|}\sum_{u\in\mathcal{U}}\frac{2^{\mathbb{1}(R_{u,g_u}\le K)}-1}{\log_2\!\big(R_{u,g_u}+1\big)},$$

(26)
$$\mathrm{HitRate@K}=\frac{1}{|\mathcal{U}|}\sum_{u\in\mathcal{U}}\mathbb{1}(R_{u,g_u}\le K),$$

where $\mathbb{1}(\cdot)$ is the indicator function, $f$ is the model to be evaluated, and $R_{u,g_u}$ is the rank predicted by the model for the ground-truth item $g_u$ of user $u$.
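As a concrete illustration, here is a minimal Python sketch of HitRate@K and NDCG@K as defined in Eqs. (25) and (26) for the single-ground-truth-item setting used here; the function names and list-based inputs are our own, not part of the paper's codebase:

```python
import math

# Sketch of HitRate@K and NDCG@K (Eqs. (25)-(26)) when each user has exactly
# one ground-truth item. `ranks` holds R_{u,g_u}: the 1-indexed rank the model
# assigns to user u's ground-truth item.

def hitrate_at_k(ranks: list[int], k: int) -> float:
    return sum(1 for r in ranks if r <= k) / len(ranks)

def ndcg_at_k(ranks: list[int], k: int) -> float:
    # A hit at rank r contributes 1/log2(r + 1); a miss contributes 0.
    return sum(1.0 / math.log2(r + 1) for r in ranks if r <= k) / len(ranks)

if __name__ == "__main__":
    ranks = [1, 3, 12, 7, 2]          # predicted ranks for five users
    print(hitrate_at_k(ranks, 10))    # 0.8 (4 of 5 ranked within the top 10)
    print(ndcg_at_k(ranks, 10))       # ~0.49
```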
Here $\mathcal{D}^T$ and $\mathcal{D}^F$ are the positive and negative testing sample sets, respectively, and $\mathcal{D}_u^T$ and $\mathcal{D}_u^F$ are the positive and negative testing sample sets for user $u$.

A.2.3 Request Frequency and Threshold. Figure 10 shows the relationship between the request frequency and different thresholds.

[Figure 10. Request frequency w.r.t. different thresholds.]

A.3 Training Procedure and Inference Procedure

In this section, we describe the overall pipeline in detail in conjunction with Figure 11.

[Figure 11. The overall pipeline of our proposed IntellectReq.]

1. Training Procedure

① We first pre-train an EC-CDR framework; EC-CDR can use data to generate model parameters.

② MRD training procedure. 1) Construct the MRD dataset. Assume the current time is t=T. We take the model parameters generated under the EC-CDR framework from the data at moment t=0 and apply that model to the data at the current moment t=T. We then obtain a prediction $\hat{y}$ and compare $\hat{y}$ with $y$ to determine whether the model makes a mis-recommendation.
We then repeat this with the data used for parameter generation taken from t=0 through t=T-1, which yields the MRD dataset. It contains three columns: the data used for parameter generation ($x_1$), the current data ($x_2$), and whether the model mis-recommends ($y_{MRD}$). 2) Train MRD. MRD is a fully connected neural network that takes $x_1$ and $x_2$ as input and fits the mis-recommendation label $y_{MRD}$. The trained MRD can be used to determine whether model parameters generated from data at some earlier moment are still valid for the current data. The prediction output by MRD can be regarded as a Mis-Recommendation Score (MRS).

③ DM training procedure. We map the data into a Gaussian distribution through the Conditional-VAE method, and then sample feature vectors from this distribution to complete the next-item prediction task, i.e., to predict the item that the user will click next. This yields DM. By sampling from the distribution multiple times, DM computes multiple next-item predictions, from which an Uncertainty can be calculated.

④ Joint training procedure of MRD and DM.
We use a fully connected neural network, denoted $f(\cdot)$, which takes the MRS and the Uncertainty as input and fits $y_{MRD}$ in the MRD dataset, i.e., the mis-recommendation label.

2. Inference Procedure

The MRS threshold is determined on the cloud from all recent user data according to the load, and this threshold is then sent to each edge. Suppose an edge last updated its model at some moment t=n (n<T); at moment t=T we must decide whether the model needs to be updated again, that is, whether the model has become invalid for the current data distribution. We only need to feed the MRS and the Uncertainty, computed from the data at moment t=n and the data at moment t=T, into $f(\cdot)$ to make this decision. In fact, the output is an invalidity degree, a continuous value between 0 and 1, as sketched below.
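The following is a minimal sketch of this edge-side decision rule, assuming PyTorch; `f`, the input shapes, and the sigmoid squashing are our illustrative assumptions rather than the paper's released implementation:

```python
import torch

def should_request_update(
    f: torch.nn.Module,          # joint head fitting the mis-recommendation label
    mrs: torch.Tensor,           # Mis-Recommendation Score from MRD(x_n, x_T)
    uncertainty: torch.Tensor,   # DM uncertainty from multi-sample next-item outputs
    threshold: float,            # cloud-computed threshold based on current load
) -> bool:
    """Decide on the edge whether to request fresh parameters from the cloud."""
    with torch.no_grad():
        # f outputs an invalidity degree in [0, 1] for the cached parameters.
        invalid_degree = torch.sigmoid(f(torch.stack([mrs, uncertainty], dim=-1)))
    # Only edges whose models look stale enough spend communication budget.
    return invalid_degree.item() > threshold
```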
Whether to update the edge model then depends on the threshold calculated on the cloud based on the load.

A.4 Hyperparameters and Training Schedules

We summarize the hyperparameters and training schedules of IntellectReq on the three datasets in Table 4.

Table 4. Hyperparameters and training schedules (identical across Amazon CDs, Amazon Electronic, and Douban Book).
  GPU: Tesla A100
  Optimizer: Adam
  Learning rate: 0.001
  Batch size: 1024
  Sequence length: 30
  Dimension of z: 1×64
  N: 32
  n: 10

A.4.1 Impact on the Real World. The following case is based on a dynamic model from the previous moment; if it were based on an on-edge static model, the improvement would be much more significant. We collected some intuitive data and examples to show the challenge and IntellectReq's impact on the real world.

Table 5. IntellectReq's Impact on the Real World.
                 Google                 Alibaba
                 Bytes      FLOPs       Bytes      FLOPs
  EC-CDR         4.69GB     152.46G     53.19GB    1.68T
  IntellectReq   3.79GB     123.49G     43.08GB    1.36T
  Δ              19.2%

(1) We calculate the number of bytes and FLOPs required to update one model's parameters: 48.5kB and 1.53M FLOPs.
That is, updating one model on the edge requires transmitting 48.5kB of data through edge-cloud communication and consumes 1.53M FLOPs of the cloud model's computing power. (2) According to public reports, Google processes 99,000 clicks per second, so it would need to transmit 48.5kB × 99k = 4.69GB per second and consume 1.53M × 99k = 152.46G FLOPs per second on the cloud servers. Alibaba processes 1,150,000 clicks per second, so it would need to transmit 48.5kB × 1150k = 53.19GB per second and consume 1.53M × 1150k = 1.68T FLOPs per second on the cloud servers. And these are not even peak values. Obviously, such huge bandwidth and computing-power consumption makes it hard to update the model for every edge at every moment, especially at peak times. (3) Sometimes, the distributed nature of today's clouds may be able to afford this computational volume, since enough servers can be called upon to support edge-cloud collaboration; however, the huge resource consumption is impractical in real scenarios. Moreover, according to our empirical study, IntellectReq brings 21.4% resource savings at the same performance under the APG framework, and 16.6% resource savings at the same performance under the DUET framework.
Summing up, IntellectReq saves about 19% of resources on average, which is very helpful for cost control and can facilitate the development of EC-CDR in practice. Table 5 compares our method IntellectReq with EC-CDR in the amount of transmitted data and the computing power consumed on the cloud. (4) During peak periods, resources become tight, causing stalls or even crashes, and this is already the case before EC-CDR is deployed, i.e., when edge-cloud communication performs only the most basic transmission of user data. IntellectReq can thus achieve better performance than EC-CDR under any resource limit $\epsilon$, or, equivalently, achieve with $\epsilon$ the performance for which EC-CDR requires $\epsilon+19\%$ of the resources.
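The per-second figures in Table 5 follow directly from the per-update cost. The short script below reproduces the EC-CDR column under the stated assumptions (48.5kB and 1.53M FLOPs per update, click rates as reported); the exact printed values differ slightly from the table, which we attribute to binary-versus-decimal unit prefixes and rounding:

```python
# Reproduce the back-of-the-envelope EC-CDR costs behind Table 5,
# assuming one parameter update per click.

BYTES_PER_UPDATE = 48.5e3    # 48.5 kB transmitted per update
FLOPS_PER_UPDATE = 1.53e6    # 1.53 MFLOPs consumed on the cloud per update

for platform, clicks_per_sec in [("Google", 99_000), ("Alibaba", 1_150_000)]:
    gb_per_sec = BYTES_PER_UPDATE * clicks_per_sec / 1e9
    gflops_per_sec = FLOPS_PER_UPDATE * clicks_per_sec / 1e9
    print(f"{platform}: {gb_per_sec:.2f} GB/s, {gflops_per_sec:.2f} GFLOPs/s")

# Prints roughly 4.8 GB/s and 151 GFLOPs/s for Google, and roughly
# 55.8 GB/s and 1.76 TFLOPs/s for Alibaba; Table 5 reports 4.69GB/152.46G
# and 53.19GB/1.68T, the gaps coming from unit conventions and rounding.
```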
To achieve this, we design IntellectReq for deployment on the edge, tasked with assessing the necessity of requests at minimal resource usage. This strategy significantly boosts communication efficiency in EC-CDR. IntellectReq is operationalized through two components, the Mis-Recommendation Detector (MRD) and the Distribution Mapper (DM).

The MRD predicts the likelihood that the edge recommendation model makes incorrect recommendations, termed mis-recommendations. It does so by learning to map the current data, together with the earlier data that was used to update the last model, to mis-recommendation labels. Moreover, MRD translates these predictions into the potential revenue of updating the edge model, thus maximizing revenue within any communication budget and keeping the model at its best performance. The DM allows the model to detect potential shifts in the data distribution and to assess the model's uncertainty in interpreting real-time data, which in turn augments the MRD module. It comprises three components: a prior network, a posterior network, and a next-item prediction network, with the last serving as DM's backbone. During training, data features are extracted through both the prior and posterior networks, using the prior information provided by labels to enhance training efficiency. At inference, the prior network is used for feature extraction. By evaluating the model's uncertainty on real-time data (achieved by mapping this data to a normal distribution), DM significantly improves MRD's prediction accuracy. Conventional recommendation datasets prove inadequate for these tasks, so we restructure them into a new MRD dataset without any extra annotations. This restructuring provides the supervisory signal for training our MRD and DM models, ensuring their effectiveness in the EC-CDR system.

To summarize, our contributions are four-fold:
• We are the first to identify the issues of high communication frequency and low communication revenue in EC-CDR and to introduce IntellectReq to address them, a method that brings edge recommendation models to SOTA performance and achieves personalized updates without retraining.
• We design IntellectReq and instantiate it with a Mis-Recommendation Detector (MRD) and a Distribution Mapper (DM). IntellectReq quantifies changes in the on-edge data distribution and, given the actual communication or cloud computing budget, determines which edge models need to be updated.
• We construct mis-recommendation datasets from existing recommendation datasets, as current datasets are not suitable for training IntellectReq, thereby enabling its training without additional manual annotations.
• We evaluate our method with extensive experiments, which demonstrate that IntellectReq achieves high revenue under any edge-cloud communication budget.

2. Related Work

Edge-cloud Collaboration. Deep learning applications are widely used (Wang et al., 2017; Li et al., 2022b, 2023c, 2023d; Wu et al., 2023a, b; Tang et al., 2024b; Qin et al., 2020), but they are fundamentally resource-intensive and difficult to deploy on the edge (Tang et al., 2024a; Chen et al., 2024, 2023; Huang et al., 2022b, 2023, a; Cao et al., 2023; Li et al., 2023a, 2022a, b), so edge-cloud collaboration (Yao et al., 2022b; Qian et al., 2022) plays an increasingly important role.
Cloud-based and on-edge machine learning are two distinct approaches with different benefits and drawbacks; edge-cloud collaboration can take advantage of both and make them complement one another. Federated learning, such as FedAVG (McMahan et al., 2017), is one of the best-known forms of edge-cloud collaboration and is often applied to tasks such as multi-task learning (Mills et al., 2021; Marfoq et al., 2021). But federated learning is too rigid a form of edge-cloud collaboration for many real-world scenarios. (Yao et al., 2022a) designs multiple models with the same function but different training processes, and uses a meta controller to determine which model should be used. EC-CDR methods such as DUET (Lv et al., 2023b) draw inspiration from the HyperNetwork concept, ensuring that edge models generalize well to the current data distribution at every moment without any training on the edge. However, high request frequency and low communication revenue significantly reduce their practicality; this paper focuses on addressing these shortcomings of EC-CDR.

Sequential Recommendation. Sequential recommendation models the user's historical behavior sequence. Earlier sequential recommendation algorithms such as (Rendle et al., 2010) and (Latifi et al., 2021) are not based on deep learning and use Markov chains to model behavioral sequences. To improve performance, recent works (Hidasi et al., 2016; Zhou et al., 2018; Kang and McAuley, 2018; Sun et al., 2019; Wu et al., 2019; Chang et al., 2021; Zhang et al., 2023a, 2020; Chen et al., 2021; Lv et al., 2023a, 2022; Su et al., 2023b, a; Ji et al., 2023b, a; Li et al., 2023e, 2024; Lin et al., 2023; Xinyu Lin and Chua, 2024) propose sequential recommendation models based on deep learning. Among them, the best-known are the following: GRU4Rec (Hidasi et al., 2016) uses a GRU to model behavior sequences and achieves excellent performance, while DIN (Zhou et al., 2018) and SASRec (Kang and McAuley, 2018) introduce attention and the transformer, respectively, into sequential recommendation and are fast and efficient. These methods are influential in both academia and industry. In practical settings, deploying recommendation models at the edge is constrained by limited parameter budgets and model complexity, alongside the need for real-time operation, which hampers real-time model updates with conventional methods and impacts the model's generalization across data distributions. This paper explores methods to lower communication costs for a more efficient EC-CDR paradigm.

3. Methodology

We describe the proposed IntellectReq in this section by presenting each module and introducing its learning strategy.

3.1. Problem Formulation
In EC-CDR, we have access to a set of edges $\mathcal{D}=\{d^{(i)}\}_{i=1}^{\mathcal{N}_d}$, where each edge holds its personal i.i.d. history samples $\mathcal{S}_{H^{(i)}}=\{x^{(j,t)}_{H^{(i)}}=\{u^{(j)}_{H^{(i)}},v^{(j)}_{H^{(i)}},s^{(j,t)}_{H^{(i)}}\},\,y^{(j)}_{H^{(i)}}\}_{j=1}^{\mathcal{N}_{H^{(i)}}}$ and real-time samples $\mathcal{S}_{R^{(i)}}=\{x^{(j,t)}_{R^{(i)}}=\{u^{(j)}_{R^{(i)}},v^{(j)}_{R^{(i)}},s^{(j,t)}_{R^{(i)}}\}\}_{j=1}^{\mathcal{N}_{R^{(i)}}}$ in the current session, where $\mathcal{N}_d$, $\mathcal{N}_{H^{(i)}}$, and $\mathcal{N}_{R^{(i)}}$ denote the number of edges, history samples, and real-time samples, respectively. $u$, $v$, and $s$ denote the user, the item, and the click sequence composed of items; $s^{(j,t)}$ is the click sequence at moment $t$ in the $j$-th sample.

The goal of EC-CDR is to generalize a trained global cloud model $\mathcal{M}_g(\cdot;\Theta_g)$, learned from $\{\mathcal{S}_{H^{(i)}}\}_{i=1}^{\mathcal{N}_d}$, to each specific local edge model $\mathcal{M}_{d^{(i)}}(\cdot;\Theta_{d^{(i)}})$ conditioned on the real-time samples $\mathcal{S}_{R^{(i)}}$, where $\Theta_g$ and $\Theta_{d^{(i)}}$ denote the learned parameters of the global cloud model and the local edge model:

(1) $\text{EC-CDR}:\ \underbrace{\mathcal{M}_g(\{\mathcal{S}_{H^{(i)}}\}_{i=1}^{\mathcal{N}_d};\Theta_g)}_{\mathrm{Global\ Cloud\ Model}}\ \xleftrightarrow[\text{Data}]{[\text{Parameters}]}\ \underbrace{\mathcal{M}_{d^{(i)}}(\mathcal{S}_{R^{(i)}};\Theta_{d^{(i)}})}_{\mathrm{Local\ Edge\ Model}}.$

To determine whether to request parameters from the cloud, IntellectReq uses $\mathcal{S}_{MRD}$ to learn a Mis-Recommendation Detector, which decides whether the edge model should be updated through the EC-CDR framework. $\mathcal{S}_{MRD}$ is a dataset constructed from $\mathcal{S}_H$ without any additional annotation for training IntellectReq, and $\Theta_{MRD}$ denotes the learned parameters of the local MRD model:

(2) $\text{IntellectReq}:\ \underbrace{\mathcal{M}^{t}_{c^{(i)}}(\mathcal{S}_{MRD};\Theta_{MRD})}_{\mathrm{Local\ Edge\ Control\ Model}}\ \xrightarrow{\text{Control}}\ \underbrace{\big(\mathcal{M}_g\ \xleftrightarrow[\text{Data}]{[\text{Parameters}]}\ \mathcal{M}_{d^{(i)}}\big)}_{\text{EC-CDR}}.$

3.2. IntellectReq

Figure 3 gives an overview of the recommendation model, EC-CDR, and the IntellectReq framework, which consists of a Mis-Recommendation Detector (MRD) and a Distribution Mapper (DM) to achieve high revenue under any request budget. We first introduce EC-CDR and then present IntellectReq, which we propose to overcome the frequent, low-revenue requests of EC-CDR; IntellectReq achieves high communication revenue under any edge-cloud communication budget in EC-CDR.
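To make the control relation in Eq. (2) concrete, the following is a minimal sketch, in PyTorch-style Python, of one on-edge inference step that consults a local control model before requesting parameters. All names (`mrd_model`, `cloud.generate`, `load_dynamic_layers`) are our own illustrative stand-ins, not the released implementation, and the decision rule anticipates the MRS threshold formalized in Section 3.2.4.

```python
import torch

def edge_inference_step(edge_model, mrd_model, seq_prev, seq_now,
                        features, threshold, cloud):
    """One on-edge step under IntellectReq: the local control model M_c
    decides whether the EC-CDR channel of Eq. (2) should be exercised."""
    # A low score means the current edge model is likely to mis-recommend,
    # so requesting fresh parameters has high expected revenue.
    mrs = 1.0 - mrd_model(seq_prev, seq_now)   # cf. Eq. (19)
    if mrs <= threshold:                       # cf. Eq. (20)
        # Only now does the edge contact the cloud generator.
        dynamic_params = cloud.generate(seq_now)
        edge_model.load_dynamic_layers(dynamic_params)
    # Inference always runs locally, with whichever parameters are loaded.
    with torch.no_grad():
        return edge_model(features)
```

The point of the sketch is that the expensive edge-cloud round trip sits behind a cheap local test, so communication happens only when the detector expects it to pay off.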
MRD determines whether to request parameters from the cloud model $\mathcal{M}_g$ or to keep using the edge recommendation model $\mathcal{M}_d$, based on the real-time data $\mathcal{S}_{R^{(i)}}$. DM helps MRD make further judgments by estimating the uncertainty in the recommendation model's understanding of the data semantics.

3.2.1. The Framework of EC-CDR

In EC-CDR, a recommendation model with static layers and dynamic layers is trained for the global cloud model. The goal of EC-CDR can thus be formulated as the following optimization problem:

(3) $\hat{y}^{(j)}_{H^{(i)}} = f_{rec}\big(\Omega(x^{(j)}_{H^{(i)}};\Theta_{g_b});\Theta_{g_c}\big), \qquad \mathcal{L}_{rec} = \sum_{i=1}^{\mathcal{N}_d}\sum_{j=1}^{\mathcal{N}_{R^{(i)}}} \mathcal{D}_{ce}\big(y^{(j)}_{H^{(i)}},\hat{y}^{(j)}_{H^{(i)}}\big),$

where $\mathcal{D}_{ce}(\cdot,\cdot)$ denotes the cross-entropy between two probability distributions, $f_{rec}(\cdot)$ denotes the dynamic layers of the recommendation model, and $\Omega(x^{(j)}_{H^{(i)}};\Theta_{g_b})$ denotes the static layers that extract features from $x^{(j)}_{H^{(i)}}$. EC-CDR decouples the edge model into "static layers" and "dynamic layers" in its training scheme to achieve better personalization.

The primary factor enhancing the on-edge model's generalization to real-time data through EC-CDR is its dynamic layers. Upon completion of training, the static layers' parameters $\Theta_{g_b}$ remain fixed, as determined by Eq. (3). Conversely, the dynamic layers' parameters $\Theta_{g_c}$ are dynamically generated from real-time data by the cloud generator. In edge inference, the cloud-based parameter generator uses the real-time click sequence $s^{(j,t)}_{R^{(i)}}\in\mathcal{S}_{R^{(i)}}$ to generate the parameters:

(4) $h^{(n)}_{R^{(i)}} = L^{(n)}_{layer}\big(e^{(j,t)}_{R^{(i)}} = E_{shared}(s^{(j,t)}_{R^{(i)}})\big), \qquad \forall n = 1,\cdots,N_l,$

where $E_{shared}(\cdot)$ is the shared encoder and $L^{(n)}_{layer}(\cdot)$ is a linear layer that maps $e^{(j,t)}_{R^{(i)}}$, the output of $E_{shared}(\cdot)$, to the features of the $n$-th dynamic layer; $e^{(j,t)}_{R^{(i)}}$ is the embedding vector generated from the click sequence at moment $t$. The cloud generator treats the parameters of a fully-connected layer as a matrix $K^{(n)}\in\mathbb{R}^{N_{in}\times N_{out}}$, where $N_{in}$ and $N_{out}$ are the numbers of input and output neurons of the $n$-th fully-connected layer. The cloud generator $g(\cdot)$ then converts the real-time click sequence $s^{(j,t)}_{R^{(i)}}$ into the dynamic layers' parameters $\hat{\Theta}_{g_c}$ via $K^{(n)}_{R^{(i)}} = g^{(n)}(e^{(n)}_{R^{(i)}})$. Since the superscript $(n)$ is no longer needed below, we abbreviate $g(\cdot) = L^{(n)}_{layer}(E_{shared}(\cdot))$. The edge recommendation model then updates its parameters and makes inference as follows:

(5) $\hat{y}^{(j,t)}_{R^{(i)}} = f_{rec}\big(\Omega(x^{(j,t)}_{R^{(i)}};\Theta_{g_b});\ \hat{\Theta}_{g_c} = g(s^{(j,t)}_{R^{(i)}};\Theta_p)\big).$
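As a concrete illustration of Eqs. (4)-(5), here is a minimal PyTorch-style sketch of the cloud generator: a shared sequence encoder plus one linear head per dynamic layer, each head emitting a full weight matrix $K^{(n)}$. The module names, the GRU encoder choice, and all sizes are our own illustrative assumptions, not the paper's released implementation.

```python
import torch
import torch.nn as nn

class CloudGenerator(nn.Module):
    """Sketch of g(.) in Eqs. (4)-(5): E_shared encodes the click sequence
    into e^(j,t); per-layer heads L_layer^(n) emit the dynamic weights."""

    def __init__(self, item_emb_dim, hidden_dim, dynamic_shapes):
        super().__init__()
        # E_shared: any sequence encoder works; a GRU is assumed here.
        self.shared_encoder = nn.GRU(item_emb_dim, hidden_dim, batch_first=True)
        # One head per dynamic fully-connected layer, emitting N_in * N_out
        # values that are reshaped into the weight matrix K^(n).
        self.heads = nn.ModuleList(
            [nn.Linear(hidden_dim, n_in * n_out) for n_in, n_out in dynamic_shapes]
        )
        self.dynamic_shapes = dynamic_shapes

    def forward(self, click_seq_emb):        # (batch, seq_len, item_emb_dim)
        _, h = self.shared_encoder(click_seq_emb)
        e = h[-1]                            # e^(j,t): (batch, hidden_dim)
        return [
            head(e).view(-1, n_in, n_out)    # K^(n) of Eq. (4)
            for head, (n_in, n_out) in zip(self.heads, self.dynamic_shapes)
        ]
```

The edge model then plugs the returned matrices in as its dynamic layers and evaluates Eq. (5) by ordinary forward propagation; no backpropagation ever happens on the edge.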
Figure 4. Overview of the proposed Distribution Mapper. Training procedure: the architecture includes the Recommendation Network, Prior Network, Posterior Network, and Next-item Prediction Network; the loss consists of the classification loss and the KL-divergence loss. Inference procedure: the architecture includes the Recommendation Network, Prior Network, and Next-item Prediction Network; the uncertainty is calculated from the multi-sampling output.

In cloud training, all layers of the cloud generator are optimized jointly with the static layers of the primary model, conditioned on the global history data $\mathcal{S}_{H^{(i)}}=\{x^{(j)}_{H^{(i)}},y^{(j)}_{H^{(i)}}\}_{j=1}^{\mathcal{N}_{H^{(i)}}}$, rather than optimizing the static layers of the primary model first and the cloud generator afterwards. The cloud generator's loss follows Eq. (3), with the dynamic-layer parameters produced by the generator as in Eq. (5):

(6) $\mathcal{L}_{rec} = \sum_{i=1}^{\mathcal{N}_d}\sum_{j=1}^{\mathcal{N}_{H^{(i)}}} \mathcal{D}_{ce}\big(y^{(j)}_{H^{(i)}},\ f_{rec}(\Omega(x^{(j)}_{H^{(i)}};\Theta_{g_b});\ g(s^{(j,t)}_{H^{(i)}};\Theta_p))\big).$

EC-CDR can improve the generalization ability of the edge recommendation model. However, it cannot be easily deployed in a real-world environment because of its high request frequency and low communication revenue. Under the EC-CDR framework, the moment $t$ in Eq. (5) equals the current moment $T$, meaning the edge and the cloud communicate at every moment. In practice, however, much of this communication is unnecessary, because the $\hat{\Theta}_{g_c}$ generated from an earlier sequence may still work well enough. To alleviate this issue, we propose MRD and DM to decide when the edge recommendation model should update its parameters.

3.2.2. Mis-Recommendation Detector

The training procedure of MRD consists of two stages. The goal of the first stage is to construct an MRD dataset $\mathcal{S}_{MRD}$ from the user's historical data, without any additional annotation, to train the MRD. The cloud model $\mathcal{M}_g$ and the edge model $\mathcal{M}_d$ are trained exactly as in the EC-CDR training procedure:

(7) $\hat{y}^{(j,t,t')}_{R^{(i)}} = f_{rec}\big(\Omega(x^{(j,t)}_{R^{(i)}};\Theta_{g_b});\ g(s^{(j,t')}_{R^{(i)}};\Theta_p)\big).$

Here we set $t'\le t=T$. That is, when generating model parameters we use the click sequence $s^{(j,t')}_{R^{(i)}}$ from the earlier moment $t'$, but the resulting model is used to predict the current data. We then obtain $c^{(j,t,t')}$, which indicates whether the sample is predicted correctly, by comparing the prediction $\hat{y}^{(j,t,t')}_{R^{(i)}}$ with the ground truth $y^{(j,t)}_{R^{(i)}}$:

(8) $c^{(j,t,t')} = \begin{cases} 1, & \hat{y}^{(j,t,t')}_{R^{(i)}} = y^{(j,t)}_{R^{(i)}};\\ 0, & \hat{y}^{(j,t,t')}_{R^{(i)}} \ne y^{(j,t)}_{R^{(i)}}.\end{cases}$

We then construct the new mis-recommendation training dataset $\mathcal{S}^{(i)}_{MRD} = \{s^{(j,t)},\, s^{(j,t')},\, c^{(j,t,t')}\}_{0\le t'\le t=T}$, on which the dynamic layers $f_{MRD}(\cdot)$ are trained with $t=T$ and the cross-entropy loss $l(\cdot)$:

(9) $\mathcal{L}_{MRD} = \sum_{j=1}^{|\mathcal{S}^{(i)}_{MRD}|}\sum_{t'=1}^{T} l\big(c^{(j,t,t')},\ \hat{c}^{(j,t,t')} = f_{MRD}(s^{(j,t)},\, s^{(j,t')};\Theta_{MRD})\big).$
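The label construction of Eqs. (7)-(8) needs no human annotation; a minimal sketch follows, where `generate_params` and the `edge_model` interface are hypothetical stand-ins for the cloud generator and the EC-CDR edge model.

```python
def build_mrd_dataset(samples, generate_params, edge_model, T):
    """Sketch of Eqs. (7)-(8): test whether parameters generated from an
    older sequence s^(j,t') still predict the current sample correctly,
    and record the outcome as a free supervision signal."""
    mrd_rows = []
    for x, y, seqs in samples:          # seqs[t] = click sequence at moment t
        for t_prev in range(T + 1):     # enumerate 0 <= t' <= t = T
            # Load dynamic layers built from the *stale* sequence s^(j,t').
            edge_model.load_dynamic_layers(generate_params(seqs[t_prev]))
            y_hat = edge_model.predict(x)         # Eq. (7)
            c = 1 if y_hat == y else 0            # Eq. (8): correctness label
            mrd_rows.append((seqs[T], seqs[t_prev], c))
    return mrd_rows                               # rows of S_MRD
```

Each row pairs the current sequence with a stale one plus the resulting correctness bit, which is exactly what $f_{MRD}$ in Eq. (9) is trained to predict.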
3.2.3. Distribution Mapper

Although MRD can determine when to update the edge parameters, simply mapping a click sequence to a single representation in a high-dimensional space is insufficient because of the ubiquitous noise in click sequences. We therefore design DM, shown in Figure 4, to directly perceive data-distribution shift and quantify the uncertainty in the recommendation model's understanding of the data semantics. Inspired by the Conditional-VAE, we map click sequences to normal distributions. Unlike MRD, the DM module introduces a variable $u^{(j,t)}$ denoting the uncertainty into Eq. (9):

(10) $\mathcal{L}_{MRD} = \sum_{j=1}^{|\mathcal{S}^{(i)}_{MRD}|}\sum_{t'=1}^{T} l\big(c^{(j,t,t')},\ \hat{c}^{(j,t,t')} = f_{MRD}(s^{(j,t)},\, s^{(j,t')},\, u^{(j,t)};\Theta_{MRD})\big).$

The uncertainty variable $u^{(j,t)}$ reflects the recommendation model's understanding of the data semantics; DM focuses on how to learn it.

The Distribution Mapper consists of three components, as shown in the figure in the Appendix: the Prior Network $P(\cdot)$ (PRN), the Posterior Network $Q(\cdot)$ (PON), and the Next-item Prediction Network $f(\cdot)$ (NPN), which includes the static layers $\Omega(\cdot)$ and the dynamic layers $f_{NPN}(\cdot)$. Note that $\Omega(\cdot)$ here is the same as in Sections 3.2.1 and 3.2.2, so DM incurs almost no additional resource consumption. We first introduce the three components separately, and then the training and inference procedures.

Prior Network. The Prior Network, with weights $\Theta_{prior}$ and $\Theta'_{prior}$, maps the representation of a click sequence $s^{(j,t)}$ to a prior probability distribution. We set this prior to a normal distribution with mean $\mu^{(j,t)}_{prior}=\Omega_{prior}(s^{(j,t)};\Theta_{prior})\in\mathbb{R}^N$ and variance $\sigma^{(j,t)}_{prior}=\Omega'_{prior}(s^{(j,t)};\Theta'_{prior})\in\mathbb{R}^N$:

(11) $z^{(j,t)} \sim P(\cdot\,|\,s^{(j,t)}) = \mathcal{N}\big(\mu^{(j,t)}_{prior},\, \sigma^{(j,t)}_{prior}\big).$

Posterior Network. The Posterior Network $\Omega_{post}$, with weights $\Theta_{post}$ and $\Theta'_{post}$, enhances the training of the Prior Network by introducing posterior information. It maps the concatenation of the next-item representation $r^{(j,t)}$ and the click-sequence representation $s^{(j,t)}$ to a normal distribution with mean $\mu^{(j,t)}_{post}=\Omega_{post}(s^{(j,t)};\Theta_{post})\in\mathbb{R}^N$ and variance $\sigma^{(j,t)}_{post}=\Omega'_{post}(s^{(j,t)};\Theta'_{post})\in\mathbb{R}^N$:

(12) $z^{(j,t)} \sim Q(\cdot\,|\,s^{(j,t)}, r^{(j,t)}) = \mathcal{N}\big(\mu^{(j,t)}_{post},\, \sigma^{(j,t)}_{post}\big).$

Next-item Prediction Network. The Next-item Prediction Network, with weights $\Theta_c$, predicts the embedding $\hat{r}^{(j,t)}$ of the next item to be clicked based on the user's click sequence $s^{(j,t)}$:

(13) $\hat{r}^{(j,t)} = f_c\big(e^{(j,t)}=\Omega(s^{(j,t)};\Theta_b),\ z^{(j,t)};\Theta_c\big), \qquad \hat{y}^{(j,t)} = f_{rec}\big(\Omega(x^{(j,t)};\Theta_{g_b}),\ \hat{r}^{(j,t)};\ g(e^{(j,t)};\Theta_p)\big).$

Training Procedure. Two losses are constructed in training: the recommendation prediction loss $\mathcal{L}_{rec}$ and the distribution difference loss $\mathcal{L}_{dist}$. As in most recommendation model training, $\mathcal{L}_{rec}$ uses the binary cross-entropy loss $l(\cdot)$ to penalize the difference between $\hat{y}^{(j,t)}$ and $y^{(j,t)}$; the difference is that here NPN uses the feature $z$ sampled from the posterior distribution $Q$ to replace $e$ in Eq. (5). In addition, $\mathcal{L}_{dist}$ penalizes the difference between the posterior distribution $Q$ and the prior distribution $P$ via the Kullback-Leibler divergence, "pulling" the posterior and prior distributions towards each other:

(14) $\mathcal{L}_{rec} = \mathbb{E}_{z\sim Q(\cdot|s^{(j,t)},y^{(j,t)})}\big[l(y^{(j,t)}, \hat{y}^{(j,t)})\big],$

(15) $\mathcal{L}_{dist} = D_{KL}\big(Q(z\,|\,s^{(j,t)},y^{(j,t)})\ \|\ P(z\,|\,s^{(j,t)})\big).$

Finally, we optimize DM according to

(16) $\mathcal{L}(y^{(j,t)}, s^{(j,t)}) = \mathcal{L}_{rec} + \beta\cdot\mathcal{L}_{dist}.$

During training, the weights are randomly initialized.

Inference Procedure. In the inference procedure, the Posterior Network is removed from DM because no posterior information is available at that point. The uncertainty variable $u^{(j,t)}$ is calculated from the multi-sampling outputs:

(17) $u^{(j,t)} = \mathrm{var}\big(\hat{r}_i = f_c(\Omega(s^{(j,t)};\Theta_b),\ z^{(j,t)}_{i\sim 1\ldots n};\Theta_c)\big),$

where $n$ denotes the number of samples. Specifically, considering $\hat{r}^{(j,t)}$ to be of dimension $N\times 1$ and writing $\hat{r}^{(j,t),(k)}_i$ for the $k$-th value of the vector $\hat{r}^{(j,t)}_i$, the variance is calculated as

(18) $\mathrm{var}(\hat{r}_i) = \sum_{k=1}^{N} \mathrm{var}_{i\sim 1\ldots n}\ \hat{r}^{(j,t),(k)}_i.$
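A minimal sketch of the multi-sampling estimate of Eqs. (17)-(18), assuming the prior network returns a mean vector and a variance vector as in Eq. (11); all module names are illustrative.

```python
import torch

def dm_uncertainty(prior_net, npn, omega, click_seq, n_samples=10):
    """Sketch of Eqs. (17)-(18): draw n latent samples from the prior and
    use the variance of the predicted next-item embeddings as u^(j,t)."""
    e = omega(click_seq)                    # shared feature extractor Omega
    mu, var = prior_net(click_seq)          # prior N(mu, var), Eq. (11)
    preds = []
    for _ in range(n_samples):
        z = mu + var.sqrt() * torch.randn_like(mu)  # reparameterized sample
        preds.append(npn(e, z))             # next-item embedding, Eq. (13)
    preds = torch.stack(preds)              # (n, batch, N)
    # Eq. (18): sum the per-dimension variances across the n samples.
    return preds.var(dim=0, unbiased=False).sum(dim=-1)
```

A confident model maps the sequence to a tight distribution, so the sampled predictions barely move and $u^{(j,t)}$ is small; a shifted distribution produces scattered predictions and a large $u^{(j,t)}$.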
3.2.4. On-edge Model Update

The Mis-Recommendation Score (MRS) is a variable computed from the outputs of MRD and DM; it directly determines whether the model needs to be updated:

(19) $MRS = 1 - f_{MRD}(s^{(j,t)},\, s^{(j,t')};\Theta_{MRD}),$

(20) $Update = \mathbb{1}\big(MRS \le Threshold\big),$

where $\mathbb{1}(\cdot)$ is the indicator function. To obtain the threshold, we collect user data for a period of time, compute and sort the corresponding MRS values on the cloud, and set the threshold according to the load of the cloud server. For example, if the cloud server's load needs to be reduced by 90%, i.e., to 10% of its previous value, it suffices to send the value at the lowest 10% position to each edge as the threshold. During inference, each edge decides whether it needs to update the edge model, i.e., whether to request new parameters, based on Eqs. (19) and (20).
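The load-based threshold rule described above amounts to taking a lower quantile of recently observed MRS values. A minimal sketch, assuming the scores have already been gathered on the cloud and the function name is ours:

```python
import numpy as np

def mrs_threshold(recent_mrs_scores, target_request_ratio):
    """Sketch of the Section 3.2.4 rule: if the cloud can serve only
    `target_request_ratio` of its previous traffic, ship the corresponding
    lower quantile of recent MRS values to every edge as the threshold."""
    # E.g. target_request_ratio = 0.10 admits roughly the lowest 10% of MRS,
    # i.e. exactly the requests the detector judges most valuable.
    return float(np.quantile(np.asarray(recent_mrs_scores),
                             target_request_ratio))

# Edges then request parameters whenever MRS <= threshold (Eq. 20).
```

Because the quantile is recomputed from recent traffic, the same mechanism serves any communication budget: tightening the budget only moves the threshold down.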
4. Experiments

We conducted extensive experiments to evaluate the effectiveness and generalizability of the proposed IntellectReq. Part of the experimental setup, results, and analysis is provided in the Appendix.

4.1. Experimental Setup

Datasets. We evaluate on Amazon CDs (CDs), Amazon Electronic (Electronic), and Douban Book (Book), three widely used public benchmarks for recommendation tasks.

Evaluation Metrics. We use the widely adopted AUC and UAUC (note that a 0.1% absolute AUC gain is regarded as significant for the CTR task (Yan et al., 2022b; Lv et al., 2023b; Kang and McAuley, 2018; Zhou et al., 2018)), together with HitRate and NDCG, as the metrics to evaluate model performance.

Baselines. To verify applicability, the following representative sequential modeling approaches are implemented and compared with their counterparts combined with the proposed method. DUET (Lv et al., 2023b) and APG (Yan et al., 2022b) are the state of the art in EC-CDR; they generate parameters through edge-cloud collaboration for different tasks, and with the cloud generator model the on-edge model can generalize to the current data distribution in each session without on-edge training. GRU4Rec (Hidasi et al., 2016), DIN (Zhou et al., 2018), and SASRec (Kang and McAuley, 2018) are three of the most widely used sequential recommendation methods in academia and industry, respectively introducing GRU, attention, and self-attention into recommendation systems. LOF (Breunig et al., 2000) and OC-SVM (Tax, 2002) estimate the density of a given point via the ratio of the local reachability of its neighbors to its own; they can be used to detect changes in the distribution of click sequences. For IntellectReq, we use SASRec as the edge model unless otherwise stated, but note that IntellectReq applies broadly to sequential recommendation models such as DIN, GRU4Rec, etc.

4.2. Experimental Results

4.2.1. Quantitative Results

Figure 5. Performance w.r.t. request frequency, curves based on the on-edge dynamic model with a one-step time difference.
Figure 6. Performance w.r.t. request frequency, based on the on-edge dynamic model with a one-step time difference.
Figure 7. Performance w.r.t. request frequency, based on the on-edge static model.

Figures 5, 6, and 7 summarize the quantitative results of our framework and other methods on the CDs and Electronic datasets. The experiments are based on state-of-the-art EC-CDR frameworks, DUET and APG. As shown in Figures 5-6, we combine the parameter generation framework with three sequential recommendation models (DIN, GRU4Rec, SASRec) and evaluate with the AUC and UAUC metrics on the CDs and Book datasets. We have the following findings:

(1) If every edge model is updated at moment $t-1$, the DUET framework (DUET) and the APG framework (APG) can be viewed as the performance upper bound of all methods, since DUET and APG are evaluated at a fixed 100% request frequency while the other methods are evaluated at increasing frequencies. If all edge models equal the cloud pre-trained model, IntellectReq can even beat DUET, which indicates that in EC-CDR not every edge needs to be updated at every moment; in fact, model parameters generated from user data at some moments can be detrimental to performance. Note that directly comparing the other methods with DUET and APG is not fair, as DUET and APG use a fixed 100% request frequency and cannot be deployed at lower request frequencies.

(2) The random request method (DUET (Random), APG (Random)) works under any request budget. However, in most cases it does not give the optimal request scheme for a given budget (e.g., Row 1), and its performance tends to correlate linearly with the request frequency. The performance of random requests is unstable and unpredictable; it outperforms other methods only in a few cases.

(3) LOF (DUET (LOF), APG (LOF)) and OC-SVM (DUET (OC-SVM), APG (OC-SVM)) can serve as simple baselines that produce a request scheme under one special, specific request budget. However, they have two weaknesses: they consume substantial resources and thus significantly slow down computation, and they work only at one specific request budget rather than an arbitrary one. For example, in the first row, the request frequency of OC-SVM can only take a single fixed value.

(4) In most cases, our IntellectReq produces the optimal request scheme under any request budget.

4.2.2. Mis-Recommendation Score and Profit

Figure 8. Mis-Recommendation Score and Revenue.

To further study the effectiveness of MRD, we visualize the request timing and revenue in Figure 8 and analyze the relationship between requests and revenue. Every 100 users were randomly assigned to one of 15 groups. The figure is divided into three parts: the first assesses the request, and the second and third assess the benefit. The metric used here is the Mis-Recommendation Score (MRS), which evaluates the request revenue.
MRS measures whether a recommendation will be made in error; in other words, it can be viewed as an evaluation of the model's generalization ability. The lower the score, the higher the probabilities of a mis-recommendation and of requesting model parameters.

• IntellectReq predicts the MRS based on the uncertainty and the click sequences at moments $t$ and $t-1$.
• DUET (Random) randomly selects edges that request the cloud model to update their parameters. Its MRS can be regarded as an arbitrary constant; we take the average of IntellectReq's MRS as its value.
• DUET (w. Request) means every edge model is updated at moment $t$.
• DUET (w/o. Request) means no edge model is updated at moment $t-1$ in Figures 5 and 6, and no edge model is updated at moment 0 in Figure 7.
• Request Revenue represents the revenue, i.e., the DUET (w. Request) curve minus the DUET (w/o. Request) curve.

From Figure 8, we have the following observations:

(1) The trends of MRS and DUET revenue typically move in opposite directions. When the MRS value is low, IntellectReq tends to believe that the edge model cannot generalize to the current data distribution and therefore requests model parameters with the most recent real-time data; the revenue at such moments is frequently positive and relatively high. When the MRS value is high, IntellectReq tends to keep the model updated at the previous moment $t-1$ instead of $t$, believing that the on-edge model still generalizes to the current distribution; requesting parameters at such moments frequently yields low, negative revenue.

(2) Since the MRS of DUET (Random) is constant, it cannot predict the revenue of each request; its performance curve changes randomly because of the arbitrary ordering of the groups.

4.2.3. Ablation Study

Figure 9. Ablation study on model architecture.

We conducted an ablation study to show the effectiveness of the different components of IntellectReq; the results are shown in Figure 9. We use w/o. and w. to denote "without" and "with", respectively:

• IntellectReq means both DM and MRD are used.
• (w/o. DM) means MRD is used but DM is not.
• (w/o. MRD) means DM is used but MRD is not.

From the figure, we have the following observations: (1) IntellectReq generally achieves the best performance under different evaluation metrics in most cases, demonstrating its effectiveness. (2) When the request frequency is small, the difference between IntellectReq and IntellectReq (w/o. DM) is not immediately apparent, as shown in Figure 9(d); the difference becomes more noticeable as the request frequency increases within a certain range. In brief, the gap first shrinks, then grows, and finally shrinks again.

4.2.4. Time and Space Cost

Most edges have limited storage space, so the on-edge model must be small yet sufficient. The edge's computing power is also rather limited, and the recommendation task on the edge requires substantial real-time processing, so the model deployed on the edge must be both simple and fast.
Therefore, we analyze whether these methods are controllable and highly profitable under the DUET framework; the additional time and space resource consumption under this framework is shown in Table 1.

Table 1. Extra time and space cost on the CDs dataset.

Method | Controllable | Profitable | Time Cost | Space Cost (Param.)
LOF | ✗ | ✓ | 225s / 11.3ms | ≈0
OC-SVM | ✗ | ✓ | 160s / 9.7ms | ≈0
Random | ✓ | ✗ | 0s / 0.8ms | ≈0
IntellectReq | ✓ | ✓ | 11s / 7.9ms | ≈5.06k

In the time cost column, "/" separates the time consumption of cloud preprocessing and of edge inference. Cloud preprocessing means the cloud server first calculates MRS values from recent user data, determines the threshold according to its communication budget, and sends it to the edges. Edge inference refers to the MRS computed whenever the on-edge click sequence is updated. The experimental results show that: 1) in time consumption, random requests are the fastest for both cloud preprocessing and edge inference, followed by our IntellectReq, with LOF and OC-SVM the slowest; 2) in space consumption, Random, LOF, and OC-SVM require essentially no additional space, whereas our method requires deploying an additional 5.06k parameters on the edge; 3) in controllability, Random and our IntellectReq can realize edge-cloud communication under an arbitrary communication budget, while LOF and OC-SVM cannot; 4) in profitability, LOF, OC-SVM, and our IntellectReq all achieve high revenue, but random requests do not. Overall, IntellectReq requires only minimal time consumption (it does not affect real-time performance) and space consumption (it is easy to deploy on smart edges) while providing both controllability and high profitability.

5. Conclusion

In this paper, we argue that under the EC-CDR framework most communications requesting new parameters from the cloud-based recommendation system are unnecessary, because on-edge data distributions are often stable. We introduced IntellectReq, a low-resource solution that estimates the value of each request and ensures adaptive, high-revenue edge-cloud communication. IntellectReq formulates a novel edge intelligence task to identify out-of-domain data, maps real-time user behavior to a normal distribution, and uses multi-sampling outputs to assess the edge model's adaptability to user actions. Extensive experiments on three public benchmarks confirm IntellectReq's efficiency and broad applicability, enabling a more effective edge-cloud collaborative recommendation approach.

Acknowledgment

This work was supported by the National Key R&D Program of China (No. 2022ZD0119100), the Scientific Research Fund of Zhejiang Provincial Education Department (Y202353679), the National Natural Science Foundation of China (No. 62376243, 62037001, U20A20387), the StarryNight Science Fund of Zhejiang University Shanghai Institute for Advanced Study (SN-ZJU-SIAS-0010), a project of Shanghai AI Laboratory (P22KS00111), and the Program of Zhejiang Province Science and Technology (2022C01044).

References
Markus M. Breunig, Hans-Peter Kriegel, Raymond T. Ng, and Jörg Sander. 2000. LOF: Identifying Density-Based Local Outliers. In Proceedings of the 2000 ACM SIGMOD International Conference on Management of Data, 93–104.
Han Cai, Chuang Gan, Ligeng Zhu, and Song Han. 2020. TinyTL: Reduce Activations, Not Trainable Parameters for Efficient On-Device Learning. (2020).
Defu Cao, Yixiang Zheng, Parisa Hassanzadeh, Simran Lamba, Xiaomo Liu, and Yan Liu. 2023. Large Scale Financial Time Series Forecasting with Multi-faceted Model. In Proceedings of the Fourth ACM International Conference on AI in Finance (ICAIF '23), 472–480. https://doi.org/10.1145/3604237.3626868
Jianxin Chang, Chen Gao, Yu Zheng, Yiqun Hui, Yanan Niu, Yang Song, Depeng Jin, and Yong Li. 2021. Sequential Recommendation with Graph Neural Networks. In SIGIR, 378–387.
Zhengyu Chen and Donglin Wang. 2021. Multi-Initialization Meta-Learning with Domain Adaptation. In ICASSP, IEEE, 1390–1394.
Zhengyu Chen, Teng Xiao, and Kun Kuang. 2022. BA-GNN: On Learning Bias-Aware Graph Neural Network. In ICDE, IEEE, 3012–3024.
Zhengyu Chen, Teng Xiao, Kun Kuang, Zheqi Lv, Min Zhang, Jinluan Yang, Chengqiang Lu, Hongxia Yang, and Fei Wu. 2023. Learning to Reweight for Graph Neural Network. arXiv preprint arXiv:2312.12475.
Zhengyu Chen, Teng Xiao, Kun Kuang, Zheqi Lv, Min Zhang, Jinluan Yang, Chengqiang Lu, Hongxia Yang, and Fei Wu. 2024. Learning to Reweight for Generalizable Graph Neural Network. In AAAI.
Zhengyu Chen, Ziqing Xu, and Donglin Wang. 2021. Deep Transfer Tensor Decomposition with Orthogonal Constraint for Recommender Systems. In AAAI, Vol. 35, 4010–4018.
David Ha, Andrew Dai, and Quoc V. Le. 2017. HyperNetworks. (2017).
Balázs Hidasi, Alexandros Karatzoglou, Linas Baltrunas, and Domonkos Tikk. 2016. Session-Based Recommendations with Recurrent Neural Networks. In ICLR.
Rongjie Huang, Jiawei Huang, Dongchao Yang, Yi Ren, Luping Liu, Mingze Li, Zhenhui Ye, Jinglin Liu, Xiang Yin, and Zhou Zhao. 2023. Make-An-Audio: Text-to-Audio Generation with Prompt-Enhanced Diffusion Models. arXiv preprint arXiv:2301.12661.
Rongjie Huang, Max W. Y. Lam, Jun Wang, Dan Su, Dong Yu, Yi Ren, and Zhou Zhao. 2022a. FastDiff: A Fast Conditional Diffusion Model for High-Quality Speech Synthesis. In IJCAI, 4157–4163.
Rongjie Huang, Yi Ren, Jinglin Liu, Chenye Cui, and Zhou Zhao. 2022b. GenerSpeech: Towards Style Transfer for Generalizable Out-of-Domain Text-to-Speech. In NeurIPS 35, 10970–10983.
Wei Ji, Renjie Liang, Lizi Liao, Hao Fei, and Fuli Feng. 2023a. Partial Annotation-Based Video Moment Retrieval via Iterative Learning. In ACM MM.
Wei Ji, Xiangyan Liu, An Zhang, Yinwei Wei, and Xiang Wang. 2023b. Online Distillation-Enhanced Multi-Modal Transformer for Sequential Recommendation. In ACM MM.
Wang-Cheng Kang and Julian McAuley. 2018. Self-Attentive Sequential Recommendation. In ICDM, IEEE, 197–206.
Sara Latifi, Noemi Mauro, and Dietmar Jannach. 2021. Session-Aware Recommendation: A Surprising Quest for the State-of-the-Art. Information Sciences 573, 291–315.
Haoxuan Li, Yanghao Xiao, Chunyuan Zheng, Peng Wu, and Peng Cui. 2023e. Propensity Matters: Measuring and Enhancing Balancing for Recommendation. In ICML, PMLR, 20182–20194.
Haoxuan Li, Yanghao Xiao, Chunyuan Zheng, Peng Wu, Zhi Geng, Xu Chen, and Peng Cui. 2024. Debiased Collaborative Filtering with Kernel-Based Causal Balancing. In ICLR.
Juncheng Li, Xin He, Longhui Wei, Long Qian, Linchao Zhu, Lingxi Xie, Yueting Zhuang, Qi Tian, and Siliang Tang. 2022a. Fine-Grained Semantically Aligned Vision-Language Pre-Training. In NeurIPS 35, 7290–7303.
Juncheng Li, Kaihang Pan, Zhiqi Ge, Minghe Gao, Hanwang Zhang, Wei Ji, Wenqiao Zhang, Tat-Seng Chua, Siliang Tang, and Yueting Zhuang. 2023a. Fine-Tuning Multimodal LLMs to Follow Zero-Shot Demonstrative Instructions. arXiv preprint arXiv:2308.04152.
Li Li, Chenwei Wang, You Qin, Wei Ji, and Renjie Liang. 2023b. Biased-Predicate Annotation Identification via Unbiased Visual Predicate Representation. In ACM MM '23, 4410–4420. https://doi.org/10.1145/3581783.3611847
Mengze Li, Han Wang, Wenqiao Zhang, Jiaxu Miao, Zhou Zhao, Shengyu Zhang, Wei Ji, and Fei Wu. 2023d. WINNER: Weakly-Supervised Hierarchical Decomposition and Alignment for Spatio-Temporal Video Grounding. In CVPR, 23090–23099.
Mengze Li, Tianbao Wang, Jiahe Xu, Kairong Han, Shengyu Zhang, Zhou Zhao, Jiaxu Miao, Wenqiao Zhang, Shiliang Pu, and Fei Wu. 2023c. Multi-Modal Action Chain Abductive Reasoning. In ACL, 4617–4628.
Mengze Li, Tianbao Wang, Haoyu Zhang, Shengyu Zhang, Zhou Zhao, Jiaxu Miao, Wenqiao Zhang, Wenming Tan, Jin Wang, Peng Wang, et al. 2022b. End-to-End Modeling via Information Tree for One-Shot Natural Language Spatial Video Grounding. In ACL, 8707–8717.
Xin-Yu Lin, Yi-Yan Xu, Wen-Jie Wang, Yang Zhang, and Fu-Li Feng. 2023. Mitigating Spurious Correlations for Self-Supervised Recommendation. Machine Intelligence Research 20, 2, 263–275.
Zheqi Lv, Feng Wang, Shengyu Zhang, Kun Kuang, Hongxia Yang, and Fei Wu. 2022. Personalizing Intervened Network for Long-Tailed Sequential User Behavior Modeling. arXiv preprint arXiv:2208.09130.
Zheqi Lv, Feng Wang, Shengyu Zhang, Wenqiao Zhang, Kun Kuang, and Fei Wu. 2023a. Parameters Efficient Fine-Tuning for Long-Tailed Sequential Recommendation. In CAAI International Conference on Artificial Intelligence, Springer, 442–459.
Zheqi Lv, Wenqiao Zhang, Shengyu Zhang, Kun Kuang, Feng Wang, Yongwei Wang, Zhengyu Chen, Tao Shen, Hongxia Yang, Beng Chin Ooi, and Fei Wu. 2023b. DUET: A Tuning-Free Device-Cloud Collaborative Parameters Generation Framework for Efficient Device Model Generalization. In Proceedings of the ACM Web Conference 2023.
Othmane Marfoq, Giovanni Neglia, Aurélien Bellet, Laetitia Kameni, and Richard Vidal. 2021. Federated Multi-Task Learning under a Mixture of Distributions. In NeurIPS 34, 15434–15447.
Brendan McMahan, Eider Moore, Daniel Ramage, Seth Hampson, and Blaise Aguera y Arcas. 2017. Communication-Efficient Learning of Deep Networks from Decentralized Data. In AISTATS, PMLR, 1273–1282.
Jed Mills, Jia Hu, and Geyong Min. 2021. Multi-Task Federated Learning for Personalised Deep Neural Networks in Edge Computing. IEEE Transactions on Parallel and Distributed Systems 33, 3, 630–641.
Xufeng Qian, Yue Xu, Fuyu Lv, Shengyu Zhang, Ziwen Jiang, Qingwen Liu, Xiaoyi Zeng, Tat-Seng Chua, and Fei Wu. 2022. Intelligent Request Strategy Design in Recommender System. In KDD '22, ACM, 3772–3782.
Fang-Yu Qin, Zhe-Qi Lv, Dan-Ni Wang, Bo Hu, and Chao Wu. 2020. Health Status Prediction for the Elderly Based on Machine Learning. Archives of Gerontology and Geriatrics 90, 104121.
Steffen Rendle, Christoph Freudenthaler, and Lars Schmidt-Thieme. 2010. Factorizing Personalized Markov Chains for Next-Basket Recommendation. In WWW.
Victor Sanh, Lysandre Debut, Julien Chaumond, and Thomas Wolf. 2019. DistilBERT, a Distilled Version of BERT: Smaller, Faster, Cheaper and Lighter. arXiv preprint arXiv:1910.01108.
Jiajie Su, Chaochao Chen, Zibin Lin, Xi Li, Weiming Liu, and Xiaolin Zheng. 2023a. Personalized Behavior-Aware Transformer for Multi-Behavior Sequential Recommendation. In ACM MM, 6321–6331.
Jiajie Su, Chaochao Chen, Weiming Liu, Fei Wu, Xiaolin Zheng, and Haoming Lyu. 2023b. Enhancing Hierarchy-Aware Graph Networks with Deep Dual Clustering for Session-Based Recommendation. In Proceedings of the ACM Web Conference 2023, 165–176.
Fei Sun, Jun Liu, Jian Wu, Changhua Pei, Xiao Lin, Wenwu Ou, and Peng Jiang. 2019. BERT4Rec: Sequential Recommendation with Bidirectional Encoder Representations from Transformer. In CIKM, 1441–1450.
Zihao Tang, Zheqi Lv, Shengyu Zhang, Fei Wu, and Kun Kuang. 2024a. ModelGPT: Unleashing LLM's Capabilities for Tailored Model Generation. arXiv preprint arXiv:2402.12408.
Zihao Tang, Zheqi Lv, Shengyu Zhang, Yifan Zhou, Xinyu Duan, Kun Kuang, and Fei Wu. 2024b. AuG-KD: Anchor-Based Mixup Generation for Out-of-Domain Knowledge Distillation. In ICLR. https://openreview.net/forum?id=fcqWJ8JgMR
David Martinus Johannes Tax. 2002. One-Class Classification: Concept Learning in the Absence of Counter-Examples. (2002).
Yunze Tong, Junkun Yuan, Min Zhang, Didi Zhu, Keli Zhang, Fei Wu, and Kun Kuang. 2023. Quantitatively Measuring and Contrastively Exploring Heterogeneity for Domain Generalization. In KDD, ACM, 2189–2200.
Xiao Wang, Peng Cui, Jing Wang, Jian Pei, Wenwu Zhu, and Shiqiang Yang. 2017. Community Preserving Network Embedding. In AAAI, Vol. 31.
Shu Wu, Yuyuan Tang, Yanqiao Zhu, Liang Wang, Xing Xie, and Tieniu Tan. 2019. Session-Based Recommendation with Graph Neural Networks. In AAAI, Vol. 33, 346–353.
Yiquan Wu, Weiming Lu, Yating Zhang, Adam Jatowt, Jun Feng, Changlong Sun, Fei Wu, and Kun Kuang. 2023a. Focus-Aware Response Generation in Inquiry Conversation. In Findings of ACL 2023, 12585–12599.
Yiquan Wu, Siying Zhou, Yifei Liu, Weiming Lu, Xiaozhong Liu, Yating Zhang, Changlong Sun, Fei Wu, and Kun Kuang. 2023b. Precedent-Enhanced Legal Judgment Prediction with LLM and Domain-Model Collaboration. arXiv preprint arXiv:2310.09241.
Xinyu Lin, Wenjie Wang, Jujia Zhao, Yongqi Li, Fuli Feng, and Tat-Seng Chua. 2024. Temporally and Distributionally Robust Optimization for Cold-Start Recommendation. In AAAI.
Bencheng Yan, Pengjie Wang, Kai Zhang, Feng Li, Jian Xu, and Bo Zheng. 2022b. APG: Adaptive Parameter Generation Network for Click-Through Rate Prediction. In NeurIPS.
Yikai Yan, Chaoyue Niu, Renjie Gu, Fan Wu, Shaojie Tang, Lifeng Hua, Chengfei Lyu, and Guihai Chen. 2022a. On-Device Learning for Model Personalization with Large-Scale Cloud-Coordinated Domain Adaption. In KDD '22, 2180–2190.
Jiangchao Yao, Feng Wang, Xichen Ding, Shaohu Chen, Bo Han, Jingren Zhou, and Hongxia Yang. 2022a. Device-Cloud Collaborative Recommendation via Meta Controller. In KDD '22, 4353–4362.
Jiangchao Yao, Shengyu Zhang, Yang Yao, Feng Wang, Jianxin Ma, Jianwei Zhang, Yunfei Chu, Luo Ji, Kunyang Jia, Tao Shen, et al. 2022b. Edge-Cloud Polarization and Collaboration: A Comprehensive Survey for AI. IEEE Transactions on Knowledge and Data Engineering.
Fengda Zhang, Kun Kuang, Long Chen, Yuxuan Liu, Chao Wu, and Jun Xiao. 2022a. Fairness-Aware Contrastive Learning with Partially Annotated Sensitive Attributes. In ICLR.
Fengda Zhang, Kun Kuang, Long Chen, Zhaoyang You, Tao Shen, Jun Xiao, Yin Zhang, Chao Wu, Fei Wu, Yueting Zhuang, et al. 2023b. Federated Unsupervised Representation Learning. Frontiers of Information Technology & Electronic Engineering 24, 8, 1181–1193.
Shengyu Zhang, Fuli Feng, Kun Kuang, Wenqiao Zhang, Zhou Zhao, Hongxia Yang, Tat-Seng Chua, and Fei Wu. 2023a. Personalized Latent Structure Learning for Recommendation. IEEE Transactions on Pattern Analysis and Machine Intelligence.
Shengyu Zhang, Tan Jiang, Tan Wang, Kun Kuang, Zhou Zhao, Jianke Zhu, Jin Yu, Hongxia Yang, and Fei Wu. 2020. DeVLBert: Learning Deconfounded Visio-Linguistic Representations. In ACM MM '20, 4373–4382.
Wenqiao Zhang, Changshuo Liu, Lingze Zeng, Beng Chin Ooi, Siliang Tang, and Yueting Zhuang. 2023c. Learning in Imperfect Environment: Multi-Label Classification with Long-Tailed Distribution and Partial Labels. In ICCV, 1423–1432.
Wenqiao Zhang and Zheqi Lv. 2024. Revisiting the Domain Shift and Sample Uncertainty in Multi-Source Active Domain Transfer. In CVPR.
Wenqiao Zhang, Haochen Shi, Jiannan Guo, Shengyu Zhang, Qingpeng Cai, Juncheng Li, Sihui Luo, and Yueting Zhuang. 2021. MAGIC: Multimodal relAtional Graph adversarIal inferenCe for Diverse and Unpaired Text-Based Image Captioning. arXiv preprint arXiv:2112.06558.
Wenqiao Zhang, Lei Zhu, James Hallinan, Shengyu Zhang, Andrew Makmur, Qingpeng Cai, and Beng Chin Ooi. 2022b. BoostMIS: Boosting Medical Image Semi-Supervised Learning with Adaptive Pseudo Labeling and Informative Active Annotation. In CVPR, 20666–20676.
Yifei Zhang, Hao Zhu, Zixing Song, Piotr Koniusz, Irwin King, et al. 2024. Mitigating the Popularity Bias of Graph Collaborative Filtering: A Dimensional Collapse Perspective. In NeurIPS 36.
Guorui Zhou, Xiaoqiang Zhu, Chenru Song, Ying Fan, Han Zhu, Xiao Ma, Yanghui Yan, Junqi Jin, Han Li, and Kun Gai. 2018. Deep Interest Network for Click-Through Rate Prediction. In KDD, 1059–1068.
Didi Zhu, Yinchuan Li, Yunfeng Shao, Jianye Hao, Fei Wu, Kun Kuang, Jun Xiao, and Chao Wu. 2023a. Generalized Universal Domain Adaptation with Generative Flow Networks. In ACM MM, 8304–8315.
Didi Zhu, Yinchuan Li, Junkun Yuan, Zexi Li, Kun Kuang, and Chao Wu. 2023b. Universal Domain Adaptation via Compressive Attention Matching. In ICCV, 6974–6985.
Appendix A. Appendix

This is the Appendix for "Intelligent Model Update Strategy for Sequential Recommendation".

A.1. Supplementary Method

A.1.1. Notations and Definitions

We summarize the notations and definitions in Table 2.

Table 2. Notations and definitions.

Notation | Definition
$u$ | User
$v$ | Item
$s$ | Behavior sequence
$d$ | Edge
$\mathcal{D}=\{d^{(i)}\}_{i=1}^{\mathcal{N}_d}$ | Set of edges
$\mathcal{S}_{H^{(i)}}$, $\mathcal{S}_{R^{(i)}}$, $\mathcal{S}_{MRD}$ | History samples, real-time samples, MRD samples
$\mathcal{N}_d$, $\mathcal{N}_{H^{(i)}}$, $\mathcal{N}_{R^{(i)}}$ | Number of edges, of history data, and of real-time data
$\Theta_g$, $\Theta_d$, $\Theta_{MRD}$ | Parameters of the global cloud model, the local edge model, and the MRD model
$\mathcal{M}_g(\cdot;\Theta_g)$, $\mathcal{M}_{d^{(i)}}(\cdot;\Theta_{d^{(i)}})$, $\mathcal{M}^t_{c^{(i)}}(\mathcal{S}_{MRD};\Theta_{MRD})$ | Global cloud model, local edge recommendation model, local edge control model
$\mathcal{L}_{rec}$, $\mathcal{L}_{MRD}$ | Recommendation loss, mis-recommendation loss
$\Omega$ | Feature extractor

A.1.2. Optimization Target

In the simplest terms, assume the set of edges is $\mathcal{D}=\{d^{(i)}\}_{i=1}^{\mathcal{N}_d}$, the set updated by the baseline method is $\mathcal{D}'_u=\{d^{(i)}\}_{i=1}^{N'_u}$, and the set updated by our method is $\mathcal{D}_u=\{d^{(i)}\}_{i=1}^{N_u}$; $\mathcal{N}_d$, $N'_u$, and $N_u$ are the sizes of $\mathcal{D}$, $\mathcal{D}'_u$, and $\mathcal{D}_u$, respectively. The communication upper bound is $N_{thres}$. Let the ground truth $y$, the baseline models' prediction $\hat{y}'$, and our model's prediction $\hat{y}$ be row vectors. Our optimization target is to obtain the highest model performance while limiting the upper bound of the communication frequency:

(21) $\text{Maximize } \hat{y}y^{T} \quad \text{subject to } 0\le N_u\le N_{thres},\ \ N_u\le N'_u,\ \ \mathcal{D}_u\subset\mathcal{D}.$

In this case, the improvement of our method is $\Delta = \hat{y}y^{T} - \hat{y}'y^{T}$. Alternatively, the target can be regarded as reducing the communication frequency without degrading performance:

(22) $\text{Minimize } N_u \quad \text{subject to } 0\le N_u\le N_{thres},\ \ \hat{y}y^{T}\ge\hat{y}'y^{T},\ \ \mathcal{D}_u\subset\mathcal{D}.$

In this case, the improvement of our method is $\Delta = N - N_u$.

A.2. Supplementary Experimental Results

A.2.1. Datasets

We evaluate IntellectReq and the baselines on Amazon CDs (CDs) and Amazon Electronic (Electronic) [https://jmcauley.ucsd.edu/data/amazon/], and Douban Book (Book) [https://www.kaggle.com/datasets/fengzhujoey/douban-datasetratingreviewside-information], three widely used public benchmarks for recommendation tasks; Table 3 shows the statistics. Following conventional practice, all user-item pairs in the dataset are treated as positive samples. To conduct sequential recommendation experiments, we arrange the items clicked by each user into a sequence ordered by timestamps. Following (Zhou et al., 2018; Kang and McAuley, 2018; Hidasi et al., 2016), we negatively sample at 1:4 in the training set and 1:199 in the testing set; negative sampling treats every user-item pair that does not appear in the dataset as a negative sample.
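A minimal sketch of this sampling scheme, assuming item sets are held as Python sets; the function name and interface are ours.

```python
import random

def sample_negatives(user_pos_items, all_items, ratio):
    """Sketch of the 1:4 (train) / 1:199 (test) negative sampling described
    above: any item the user never interacted with counts as a negative."""
    candidates = list(all_items - user_pos_items)   # set difference
    # Draw `ratio` distinct negatives per positive sample.
    return random.sample(candidates, ratio)

# ratio=4 for training samples, ratio=199 for testing samples.
```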
Table 3. Statistics of the datasets.

 | Amazon CDs | Amazon Electronic | Douban Books
#User | 1,578,597 | 4,201,696 | 46,549
#Item | 486,360 | 476,002 | 212,996
#Interaction | 3,749,004 | 7,824,482 | 1,861,533
Density | 0.0000049 | 0.0000039 | 0.0002746

A.2.2. Evaluation Metrics

In the experiments, we use the widely adopted AUC, UAUC, HitRate, and NDCG as the metrics to evaluate model performance. They are defined by the following equations:

(23) $AUC = \frac{\sum_{x^0\in\mathcal{D}^T}\sum_{x^1\in\mathcal{D}^F} \mathbb{1}[f(x^1) < f(x^0)]}{|\mathcal{D}^T|\,|\mathcal{D}^F|},$

(24) $UAUC = \frac{1}{|\mathcal{U}|}\sum_{u\in\mathcal{U}} \frac{\sum_{x^0\in\mathcal{D}^T_u}\sum_{x^1\in\mathcal{D}^F_u} \mathbb{1}[f(x^1) < f(x^0)]}{|\mathcal{D}^T_u|\,|\mathcal{D}^F_u|},$

(25) $NDCG@K = \frac{1}{|\mathcal{U}|}\sum_{u\in\mathcal{U}} \frac{2^{\mathbb{1}(R_{u,g_u}\le K)} - 1}{\log_2\big(\mathbb{1}(R_{u,g_u}\le K) + 1\big)},$

(26) $HitRate@K = \frac{1}{|\mathcal{U}|}\sum_{u\in\mathcal{U}} \mathbb{1}(R_{u,g_u}\le K),$

where $\mathbb{1}(\cdot)$ is the indicator function, $f$ is the model to be evaluated, and $R_{u,g_u}$ is the rank the model predicts for the ground-truth item $g_u$ of user $u$. $\mathcal{D}^T$ and $\mathcal{D}^F$ are the positive and negative testing sample sets, respectively, and $\mathcal{D}^T_u$ and $\mathcal{D}^F_u$ are the positive and negative testing sample sets of user $u$.
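For reference, these metrics admit short implementations. The sketch below assumes one ground-truth item per user; for NDCG it uses the common $1/\log_2(\mathrm{rank}+1)$ discount for a single relevant item, which is one conventional reading of the indicator form of Eq. (25), and the AUC sketch ignores ties.

```python
import numpy as np

def hitrate_at_k(ranks, k):
    """Eq. (26): fraction of users whose ground-truth item ranks in the top K."""
    return float(np.mean(np.asarray(ranks) <= k))      # ranks[u] = R_{u,g_u}

def ndcg_at_k(ranks, k):
    """Single-ground-truth NDCG@K with the standard log-rank discount."""
    ranks = np.asarray(ranks, dtype=float)
    gains = np.where(ranks <= k, 1.0 / np.log2(ranks + 1.0), 0.0)
    return float(np.mean(gains))

def auc(pos_scores, neg_scores):
    """Eq. (23): probability that a positive sample outscores a negative one."""
    pos = np.asarray(pos_scores)[:, None]
    neg = np.asarray(neg_scores)[None, :]
    return float(np.mean(pos > neg))                   # pairwise comparison
```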
A.2.3. Request Frequency and Threshold

Figure 10 shows the relationship between the request frequency and different thresholds.

Figure 10. Request frequency w.r.t. different thresholds.

A.3. Training Procedure and Inference Procedure

In this section, we describe the overall pipeline in detail in conjunction with Figure 11.

Figure 11. The overall pipeline of our proposed IntellectReq.

1. Training Procedure.

① We first pre-train an EC-CDR framework, which can generate model parameters from data.

② MRD training procedure. 1) Construct the MRD dataset. Assume the current time is $T$. We take the model parameters generated under the EC-CDR framework from the data at moment $t=0$ and apply the resulting model to the data at the current moment $t=T$. We then obtain a prediction $\hat{y}$ and compare it with $y$ to determine whether the model mis-recommends. Repeating this with the generation data from $t=0$ to $t=T-1$ constructs the MRD dataset, which contains three columns: the data used for parameter generation ($x_1$), the current data ($x_2$), and whether the model mis-recommends ($y_{MRD}$). 2) Train MRD. MRD is a fully connected neural network that takes $x_1$ and $x_2$ as input and fits the mis-recommendation label $y_{MRD}$. The trained MRD can determine whether model parameters generated from data at some earlier moment are still valid for the current data; its output can be read as the Mis-Recommendation Score (MRS).

③ DM training procedure. We map the data to a Gaussian distribution using the Conditional-VAE approach and sample feature vectors from this distribution to complete the next-item prediction task, i.e., to predict the item the user will click next. The resulting DM can compute multiple next-item predictions by sampling from the distribution several times, which is used to calculate the uncertainty.

④ Joint training procedure of MRD and DM. We use a fully connected neural network, denoted $f(\cdot)$, that takes the MRS and the uncertainty as input to fit $y_{MRD}$, the mis-recommendation label, in the MRD dataset.

2. Inference Procedure.

The MRS is calculated on the cloud using all recent user data, and the MRS threshold is determined according to the load; this threshold is then sent to each edge. Suppose an edge last updated its model at some moment $t=n$, $n<T$; should it update the model again at moment $t=T$, i.e., has the model become invalid for the current data distribution? We only need to feed the MRS and the uncertainty computed from the data at moments $t=n$ and $t=T$ into $f(\cdot)$ to decide. In fact, the output is a degree of invalidity, a continuous value between 0 and 1, and whether to update the edge model depends on the threshold calculated on the cloud from its load.

A.4. Hyperparameters and Training Schedules

We summarize the hyperparameters and training schedules of IntellectReq on the three datasets in Table 4.

Table 4. Hyperparameters and training schedules (shared across Amazon CDs, Amazon Electronic, and Douban Book).

Parameter | Setting
GPU | Tesla A100
Optimizer | Adam
Learning rate | 0.001
Batch size | 1024
Sequence length | 30
Dimension of $z$ | 1×64
$N$ | 32
$n$ | 10

A.4.1. Impact on the Real World

The following case is based on the dynamic model from the previous moment; based on an on-edge static model, the improvement would be much more significant. We present some intuitive data and examples to show the challenge and IntellectReq's impact on the real world.

Table 5. IntellectReq's impact on the real world.

 | Google Bytes | Google FLOPs | Alibaba Bytes | Alibaba FLOPs
EC-CDR | 4.69GB | 152.46G | 53.19GB | 1.68T
IntellectReq | 3.79GB | 123.49G | 43.08GB | 1.36T
Δ | 19.2% (average) | | |

(1) We calculate the number of bytes and FLOPs required to update one model: 48.5kB and 1.53M FLOPs. That is, updating one on-edge model requires transmitting 48.5kB through edge-cloud communication and consumes 1.53M FLOPs of the cloud model's computing power. (2) According to public reports, Google processes 99,000 clicks per second, so it would need to transmit 48.5kB × 99k ≈ 4.69GB per second and consume 1.53M × 99k ≈ 152.46G FLOPs on the cloud server; Alibaba processes 1,150,000 clicks per second, so it would need to transmit 48.5kB × 1150k ≈ 53.19GB per second and consume 1.53M × 1150k ≈ 1.68T FLOPs on the cloud server. These are not even peak values. Clearly, such enormous bandwidth and computing consumption makes it hard to update every edge's model at every moment, especially at peak times. (3) Sometimes, today's distributed clouds may afford this computational volume by calling in enough servers to support edge-cloud collaboration, but the huge resource consumption is impractical in real scenarios. Moreover, according to our empirical study, IntellectReq saves 21.4% of resources at equal performance under the APG framework and 16.6% under the DUET framework.
(1)
\[
\text{EC-CDR:}\quad
\underbrace{\mathcal{M}_g\big(\{\mathcal{S}_{H^{(i)}}\}_{i=1}^{N_d};\Theta_g\big)}_{\mathrm{Global\;Cloud\;Model}}
\;\xleftrightarrow[\;\text{Data}\;]{[\text{Parameters}]}\;
\underbrace{\mathcal{M}_{d^{(i)}}\big(\mathcal{S}_{R^{(i)}};\Theta_{d^{(i)}}\big)}_{\mathrm{Local\;Edge\;Model}}.
\]
To determine whether to request parameters from the cloud, IntellectReq uses $\mathcal{S}_{MRD}$ to learn a Mis-Recommendation Detector, which decides whether to update the edge model through the EC-CDR framework. $\mathcal{S}_{MRD}$ is the dataset constructed from $\mathcal{S}_H$, without any additional annotations, for training IntellectReq, and $\Theta_{MRD}$ denotes the learned parameters of the local MRD model.
(2)
\[
\text{IntellectReq:}\quad
\underbrace{\mathcal{M}_{c^{(i)}}^{t}\big(\mathcal{S}_{MRD};\Theta_{MRD}\big)}_{\mathrm{Local\;Edge\;Control\;Model}}
\;\xrightarrow{\;\text{Control}\;}\;
\underbrace{\big(\mathcal{M}_g \xleftrightarrow[\;\text{Data}\;]{[\text{Parameters}]} \mathcal{M}_{d^{(i)}}\big)}_{\text{EC-CDR}}.
\]
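To make the division of labor in Eqs. (1)-(2) concrete, the following minimal sketch shows the edge-side control loop in Python. All names (control_model for the control model, cloud_generator for the cloud generator, and so on) are illustrative placeholders under our assumptions, not the released implementation.

```python
# Minimal sketch of the control relation in Eqs. (1)-(2).
def edge_step(seq_now, seq_at_last_update, edge_params,
              control_model, cloud_generator, threshold):
    """Decide whether to refresh the edge model's dynamic parameters.

    control_model   -- scores how stale `edge_params` are for the current
                       data (the local edge control model of Eq. (2)).
    cloud_generator -- maps a click sequence to fresh dynamic parameters
                       (the [Parameters] arrow of Eq. (1)).
    """
    mrs = control_model(seq_at_last_update, seq_now)
    if mrs <= threshold:                  # edge model judged stale
        edge_params = cloud_generator(seq_now)   # one edge-cloud request
        seq_at_last_update = seq_now
    return edge_params, seq_at_last_update
```

Every skipped request is saved communication; the threshold itself is set on the cloud according to the load, as detailed in Section 3.2.4.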
3.2 IntellectReq
Figure 3 gives an overview of the recommendation model, EC-CDR, and the IntellectReq framework, which consists of a Mis-Recommendation Detector (MRD) and a Distribution Mapper (DM) to achieve high revenue under any request budget. We first introduce EC-CDR, and then present IntellectReq, which we propose to overcome the frequent, low-revenue requests of EC-CDR. IntellectReq achieves high communication revenue under any edge-cloud communication budget in EC-CDR. MRD determines whether to request parameters from the cloud model $\mathcal{M}_g$ or to keep using the edge recommendation model $\mathcal{M}_d$ based on the real-time data $\mathcal{S}_{R^{(i)}}$. DM helps MRD make further judgments by quantifying the uncertainty in the recommendation model's understanding of data semantics.
3.2.1 The Framework of EC-CDR
In EC-CDR, a recommendation model with static layers and dynamic layers is trained as the global cloud model. The goal of EC-CDR can thus be formulated as the following optimization problem:
(3)
\[
\hat{y}^{(j)}_{H^{(i)}} = f_{rec}\big(\Omega(x^{(j)}_{H^{(i)}};\Theta_g^b);\Theta_g^c\big),\qquad
\mathcal{L}_{rec} = \sum_{i=1}^{N_d}\sum_{j=1}^{N_{R^{(i)}}} D_{ce}\big(y^{(j)}_{H^{(i)}},\hat{y}^{(j)}_{H^{(i)}}\big),
\]
where $D_{ce}(\cdot)$ denotes the cross-entropy between two probability distributions, $f_{rec}(\cdot)$ denotes the dynamic layers of the recommendation model, and $\Omega(x^{(j)}_{H^{(i)}};\Theta_g^b)$ is the static layers extracting features from $x^{(j)}_{H^{(i)}}$. EC-CDR thus decouples the edge model into a ``static layers'' and ``dynamic layers'' training scheme to achieve better personalization. The primary factor enhancing the on-edge model's generalization to real-time data through EC-CDR is its dynamic layers. Upon completion of training, the static layers' parameters remain fixed, denoted as $\Theta_g^b$, as determined by Eq. (3). The dynamic layers' parameters $\Theta_g^c$, by contrast, are dynamically generated from real-time data by the cloud generator.
In edge inference, the cloud-based parameter generator uses the real-time click sequence $s^{(j,t)}_{R^{(i)}} \in \mathcal{S}_{R^{(i)}}$ to generate the parameters,
(4)
\[
h^{(n)}_{R^{(i)}} = L^{(n)}_{layer}\big(e^{(j,t)}_{R^{(i)}} = E_{shared}(s^{(j,t)}_{R^{(i)}})\big),\qquad \forall n = 1,\cdots,N_l,
\]
where $E_{shared}(\cdot)$ represents the shared encoder, and $L^{(n)}_{layer}(\cdot)$ is a linear layer that adjusts $e^{(j,t)}_{R^{(i)}}$, the output of $E_{shared}(\cdot)$, to the features of the $n$-th dynamic layer; $e^{(j,t)}_{R^{(i)}}$ is the embedding vector generated from the click sequence at moment $t$. The cloud generator treats the parameters of a fully-connected layer as a matrix $K^{(n)} \in \mathbb{R}^{N_{in}\times N_{out}}$, where $N_{in}$ and $N_{out}$ are the numbers of input and output neurons of the $n$-th fully-connected layer. The cloud generator $g(\cdot)$ then converts the real-time click sequence $s^{(j,t)}_{R^{(i)}}$ into the dynamic layers' parameters $\hat{\Theta}_g^c$ via $K^{(n)}_{R^{(i)}} = g^{(n)}(e^{(n)}_{R^{(i)}})$. Since the superscript $(n)$ is no longer needed below, we abbreviate $g(\cdot) = L_{layer}(E_{shared}(\cdot))$. The edge recommendation model then updates its parameters and performs inference as follows,
(5)
\[
\hat{y}^{(j,t)}_{R^{(i)}} = f_{rec}\big(\Omega(x^{(j,t)}_{R^{(i)}};\Theta_g^b);\, \hat{\Theta}_g^c = g(s^{(j,t)}_{R^{(i)}};\Theta_p)\big).
\]
Figure 4: Overview of the proposed Distribution Mapper. Training procedure: the architecture includes the Recommendation Network, Prior Network, Posterior Network, and Next-item Prediction Network; the loss consists of the classification loss and the KL-divergence loss. Inference procedure: the architecture includes the Recommendation Network, Prior Network, and Next-item Prediction Network; the uncertainty is calculated from the multi-sampling output.
In cloud training, all layers of the cloud generator model are optimized together with the static layers of the primary model, conditioned on the global history data $\mathcal{S}_{H^{(i)}} = \{x^{(j)}_{H^{(i)}}, y^{(j)}_{H^{(i)}}\}_{j=1}^{N_{H^{(i)}}}$, instead of first optimizing the static layers of the primary model and then the cloud generator model. The cloud generator model's loss function is defined as follows:
(6)
\[
\mathcal{L}_{gen} = \sum_{i=1}^{N_d}\sum_{j=1}^{N_{H^{(i)}}} D_{ce}\Big(y^{(j)}_{H^{(i)}},\, f_{rec}\big(\Omega(x^{(j)}_{H^{(i)}};\Theta_g^b);\, g(s^{(j)}_{H^{(i)}};\Theta_p)\big)\Big).
\]
EC-CDR improves the generalization ability of the edge recommendation model. However, it cannot be easily deployed in a real-world environment due to its high request frequency and low communication revenue. Under the EC-CDR framework, the moment $t$ in Eq. (5) always equals the current moment $T$, which means the edge and the cloud communicate at every moment. In practice, however, much of this communication is unnecessary, because $\hat{\Theta}_g^c$ generated from an earlier sequence may still work well enough. To alleviate this issue, we propose MRD and DM to decide when the edge recommendation model should update its parameters.
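The sketch below illustrates one plausible PyTorch realization of the generator of Eqs. (4)-(5): a shared sequence encoder feeds one linear head per dynamic layer, whose output is reshaped into that layer's weight matrix $K^{(n)} \in \mathbb{R}^{N_{in}\times N_{out}}$. The GRU encoder, the omission of bias terms, and all names are our assumptions for illustration only.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CloudGenerator(nn.Module):
    """Sketch of g(.) in Eq. (4): shared encoder E_shared plus one
    linear head L_layer^(n) per dynamic layer."""

    def __init__(self, num_items, emb_dim, layer_shapes):
        super().__init__()
        self.item_emb = nn.Embedding(num_items, emb_dim)
        self.encoder = nn.GRU(emb_dim, emb_dim, batch_first=True)  # E_shared
        self.heads = nn.ModuleList(
            [nn.Linear(emb_dim, n_in * n_out) for n_in, n_out in layer_shapes]
        )  # L_layer^(n)
        self.layer_shapes = layer_shapes

    def forward(self, click_seq):                 # click_seq: (B, T) item ids
        _, h_n = self.encoder(self.item_emb(click_seq))
        e = h_n[-1]                               # e^(j,t): (B, emb_dim)
        return [head(e).view(-1, n_in, n_out)     # K^(n) of Eq. (4)
                for head, (n_in, n_out) in zip(self.heads, self.layer_shapes)]

def dynamic_forward(features, generated):
    """Eq. (5): apply the generated dynamic layers to Omega(x)."""
    h = features.unsqueeze(1)                     # (B, 1, N_in)
    for w in generated[:-1]:
        h = F.relu(torch.bmm(h, w))
    return torch.sigmoid(torch.bmm(h, generated[-1])).squeeze(1)
```

For instance, `layer_shapes=[(64, 32), (32, 1)]` would generate a two-layer dynamic head for 64-dimensional features; only the small generated matrices need to cross the edge-cloud link.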
3.2.2 Mis-Recommendation Detector
The training procedure of MRD is divided into two stages. The goal of the first stage is to construct an MRD dataset $\mathcal{S}_C$ from the user's historical data, without any additional annotation, to train the MRD. The cloud model $\mathcal{M}_g$ and the edge model $\mathcal{M}_d$ are trained in the same way as in EC-CDR:
(7)
\[
\hat{y}^{(j,t,t')}_{R^{(i)}} = f_{rec}\big(\Omega(x^{(j,t)}_{R^{(i)}};\Theta_g^b);\, g(s^{(j,t')}_{R^{(i)}};\Theta_p)\big).
\]
Here we set $t' \le t = T$. That is, when generating model parameters we use the click sequence $s^{(j,t')}_{R^{(i)}}$ from the earlier moment $t'$, but the resulting model is used to predict the current data. We then obtain $c^{(j,t,t')}$, indicating whether the sample is correctly predicted, by comparing the prediction $\hat{y}^{(j,t,t')}_{R^{(i)}}$ with the ground truth $y^{(j,t)}_{R^{(i)}}$:
(8)
\[
c^{(j,t,t')} = \begin{cases} 1, & \hat{y}^{(j,t,t')}_{R^{(i)}} = y^{(j,t)}_{R^{(i)}};\\ 0, & \hat{y}^{(j,t,t')}_{R^{(i)}} \neq y^{(j,t)}_{R^{(i)}}.\end{cases}
\]
(9)
\[
\mathcal{L}_{MRD} = \sum_{j=1}^{|\mathcal{S}^{(i)}_{MRD}|}\sum_{t'=1}^{T} l\big(y_j,\, \hat{y} = f_{MRD}(s^{(j,t)}, s^{(j,t')};\Theta_{MRD})\big).
\]
We then construct the new mis-recommendation training dataset as $\mathcal{S}^{(i)}_{MRD} = \{s^{(j,t)}, s^{(j,t')}, c^{(j,t,t')}\}_{0 \le t' \le t = T}$. A dynamic-layers detector $f_{MRD}(\cdot)$ can then be trained on $\mathcal{S}^{(i)}_{MRD}$ according to Eq. (9), where $t = T$ and the loss function $l(\cdot)$ is cross-entropy.
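The following sketch illustrates these two stages, reusing the `CloudGenerator` and `dynamic_forward` from the sketch above: Eqs. (7)-(8) label each $(s^{(j,t')}, s^{(j,t)})$ pair by replaying parameters generated from the earlier sequence, and Eq. (9) trains a small MLP on those labels. The data layout and helper names are hypothetical.

```python
import torch
import torch.nn as nn

def build_mrd_dataset(seqs, feats, labels, omega, generator, T):
    """Eqs. (7)-(8): for each earlier step t' < T, generate parameters
    from seqs[t'], predict the step-T data with them, and record whether
    the prediction is correct (label c)."""
    rows = []
    for t_prev in range(T):
        weights = generator(seqs[t_prev])                  # stale Theta_g^c
        y_hat = dynamic_forward(omega(feats[T]), weights)  # Eq. (7)
        c = ((y_hat > 0.5) == (labels[T] > 0.5)).float()   # Eq. (8)
        rows.append((seqs[t_prev], seqs[T], c))
    return rows

class MisRecommendationDetector(nn.Module):
    """f_MRD of Eq. (9): an MLP over the two encoded sequences,
    trained with binary cross-entropy (e.g. nn.BCELoss) against c."""
    def __init__(self, dim):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU(),
                                 nn.Linear(dim, 1), nn.Sigmoid())
    def forward(self, enc_prev, enc_now):
        return self.mlp(torch.cat([enc_prev, enc_now], dim=-1)).squeeze(-1)
```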
3.2.3 Distribution Mapper
Although MRD can determine when to update edge parameters, simply mapping a click sequence to a single representation in a high-dimensional space is insufficient due to the ubiquitous noise in click sequences. We therefore design DM, shown in Figure 4, to directly perceive data distribution shift and determine the uncertainty in the recommendation model's understanding of the semantics of the data. Inspired by the Conditional-VAE, we map click sequences to normal distributions. Unlike MRD, the DM module introduces a variable $u^{(j,t)}$ to denote the uncertainty in Eq. (9):
(10)
\[
\mathcal{L}_{MRD} = \sum_{j=1}^{|\mathcal{S}^{(i)}_{MRD}|}\sum_{t'=1}^{T} l\big(y_j,\, \hat{y} = f_{MRD}(s^{(j,t)}, s^{(j,t')}, u^{(j,t)};\Theta_{MRD})\big).
\]
The uncertainty variable $u^{(j,t)}$ reflects the recommendation model's understanding of the semantics of the data, and DM focuses on learning it. The Distribution Mapper consists of three components, as shown in Figure 4: the Prior Network $P(\cdot)$ (PRN), the Posterior Network $Q(\cdot)$ (PON), and the Next-item Prediction Network $f(\cdot)$ (NPN), which includes the static layers $\Omega(\cdot)$ and the dynamic layers $f_{NPN}(\cdot)$. Note that $\Omega(\cdot)$ here is the same as in Sections 3.2.1 and 3.2.2, so there is almost no additional resource consumption. We first introduce the three components separately, then the training and inference procedures.
Prior Network. The Prior Network with weights $\Theta_{prior}$ and $\Theta'_{prior}$ maps the representation of a click sequence $s^{(j,t)}$ to a prior probability distribution. We set this prior to a normal distribution with mean $\mu^{(j,t)}_{prior} = \Omega_{prior}(s^{(j,t)};\Theta_{prior}) \in \mathbb{R}^N$ and variance $\sigma^{(j,t)}_{prior} = \Omega'_{prior}(s^{(j,t)};\Theta'_{prior}) \in \mathbb{R}^N$:
(11)
\[
z^{(j,t)} \sim P(\cdot\,|\,s^{(j,t)}) = \mathcal{N}\big(\mu^{(j,t)}_{prior}, \sigma^{(j,t)}_{prior}\big).
\]
Posterior Network. The Posterior Network $\Omega_{post}$ with weights $\Theta_{post}$ and $\Theta'_{post}$ enhances the training of the Prior Network by introducing posterior information. It maps the concatenation of the next-item representation $r^{(j,t)}$ and the click-sequence representation $s^{(j,t)}$ to a normal distribution. We define the posterior probability distribution as a normal distribution with mean $\mu^{(j,t)}_{post} = \Omega_{post}(s^{(j,t)};\Theta_{post}) \in \mathbb{R}^N$ and variance $\sigma^{(j,t)}_{post} = \Omega'_{post}(s^{(j,t)};\Theta'_{post}) \in \mathbb{R}^N$:
(12)
\[
z^{(j,t)} \sim Q(\cdot\,|\,s^{(j,t)}, r^{(j,t)}) = \mathcal{N}\big(\mu^{(j,t)}_{post}, \sigma^{(j,t)}_{post}\big).
\]
Next-item Prediction Network. The Next-item Prediction Network with weights $\Theta_c$ predicts the embedding $\hat{r}^{(j,t)}$ of the next item to be clicked based on the user's click sequence $s^{(j,t)}$, as follows,
(13)
\[
\hat{r}^{(j,t)} = f_c\big(e^{(j,t)} = \Omega(s^{(j,t)};\Theta_b),\, z^{(j,t)};\Theta_c\big),\qquad
\hat{y}^{(j,t)} = f_{rec}\big(\Omega(x^{(j,t)};\Theta_g^b),\, \hat{r}^{(j,t)};\, g(e^{(j,t)};\Theta_p)\big).
\]
Training Procedure. Two losses are constructed: the recommendation prediction loss $\mathcal{L}_{rec}$ and the distribution difference loss $\mathcal{L}_{dist}$. As in most recommendation model training, $\mathcal{L}_{rec}$ uses the binary cross-entropy loss $l(\cdot)$ to penalize the difference between $\hat{y}^{(j,t)}$ and $y^{(j,t)}$; the difference is that here NPN uses the feature $z$ sampled from the posterior distribution $Q$ to replace $e$ in Eq. (5). In addition, $\mathcal{L}_{dist}$ penalizes the difference between the posterior distribution $Q$ and the prior distribution $P$ via the Kullback-Leibler divergence; it ``pulls'' the posterior and prior distributions towards each other. The formulas for $\mathcal{L}_{rec}$ and $\mathcal{L}_{dist}$ are as follows,
(14)
\[
\mathcal{L}_{rec} = \mathbb{E}_{z\sim Q(\cdot|s^{(j,t)},y^{(j,t)})}\big[\,l(y^{(j,t)}, \hat{y}^{(j,t)})\,\big],
\]
(15)
\[
\mathcal{L}_{dist} = D_{KL}\big(Q(z\,|\,s^{(j,t)},y^{(j,t)})\,\|\,P(z\,|\,s^{(j,t)})\big).
\]
Finally, we optimize DM according to
(16)
\[
\mathcal{L}(y^{(j,t)}, s^{(j,t)}) = \mathcal{L}_{rec} + \beta\cdot\mathcal{L}_{dist}.
\]
During training, the weights are randomly initialized.
Inference Procedure. In the inference procedure, the Posterior Network is removed from DM because no posterior information is available. The uncertainty variable $u^{(j,t)}$ is calculated from the multi-sampling outputs as follows:
(17)
\[
u^{(j,t)} = \mathrm{var}\big(\hat{r}_i = f_c(\Omega(s^{(j,t)};\Theta_b),\, z^{(j,t)}_{1\sim n};\Theta_c)\big),
\]
where $n$ denotes the number of samples. Specifically, considering $\hat{r}^{(j,t)}$ of dimension $N\times 1$ and $\hat{r}^{(j,t),(k)}_i$ as the $k$-th value of the vector $\hat{r}^{(j,t)}_i$, the variance is calculated as
(18)
\[
\mathrm{var}(\hat{r}_i) = \sum_{k=1}^{N} \mathrm{var}\big(\hat{r}^{(j,t),(k)}_{1\sim n}\big).
\]
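A minimal sketch of DM's inference path follows. The prior network of Eq. (11) is realized as two linear maps, predicting the mean and, as a numerical-stability choice of ours, the log-variance; the uncertainty of Eqs. (17)-(18) is the per-dimension variance over $n$ sampled next-item predictions, summed over the $N$ feature dimensions.

```python
import torch
import torch.nn as nn

class DistributionMapper(nn.Module):
    """Sketch of the DM inference path (prior network + NPN head)."""

    def __init__(self, dim):
        super().__init__()
        self.mu = nn.Linear(dim, dim)          # Omega_prior
        self.log_sigma = nn.Linear(dim, dim)   # Omega'_prior (log scale)
        self.npn = nn.Linear(2 * dim, dim)     # f_c, next-item prediction

    def uncertainty(self, e, n_samples=10):
        """e = Omega(s): encoded click sequence, shape (B, dim)."""
        mu, sigma = self.mu(e), self.log_sigma(e).exp()
        preds = []
        for _ in range(n_samples):
            z = mu + sigma * torch.randn_like(sigma)      # z ~ P(.|s), Eq. (11)
            preds.append(self.npn(torch.cat([e, z], dim=-1)))
        preds = torch.stack(preds)                        # (n, B, dim)
        # Eqs. (17)-(18): variance across samples, summed over dimensions
        return preds.var(dim=0, unbiased=False).sum(dim=-1)  # (B,)
```

A stable user intent yields predictions that barely move across samples (low $u^{(j,t)}$); a shifted distribution spreads them out, signaling that the stale parameters may no longer fit.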
3.2.4 On-edge Model Update
The Mis-Recommendation Score (MRS) is a variable calculated from the outputs of MRD and DM, and it directly determines whether the model needs to be updated:
(19)
\[
\mathrm{MRS} = 1 - f_{MRD}(s^{(j,t)}, s^{(j,t')};\Theta_{MRD}),
\]
(20)
\[
\mathrm{Update} = \mathbb{1}(\mathrm{MRS} \le \mathrm{Threshold}),
\]
where $\mathbb{1}(\cdot)$ is the indicator function. To set the threshold, we collect user data for a period of time, compute and sort the corresponding MRS values on the cloud, and choose the threshold according to the cloud server's load. For example, if the cloud load needs to be reduced by 90%, i.e., to 10% of its previous value, the value at the lowest 10th percentile is sent to each edge as the threshold. During inference, each edge determines whether it needs to update its model, i.e., whether to request new parameters, based on Eqs. (19) and (20).
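The threshold selection and update rule of Eqs. (19)-(20) reduce to a quantile computation on the cloud and a comparison on the edge, as in the sketch below (function names are ours, for illustration).

```python
import numpy as np

def threshold_from_load(recent_mrs, keep_ratio):
    """Cloud side (Sec. 3.2.4): choose the threshold so that roughly a
    `keep_ratio` fraction of requests survive, e.g. keep_ratio=0.1
    reduces the request load by 90%."""
    return float(np.quantile(np.asarray(recent_mrs), keep_ratio))

def should_update(mrs, threshold):
    """Edge side, Eq. (20): a low MRS signals a likely mis-recommendation,
    so the edge requests fresh parameters when MRS <= threshold."""
    return mrs <= threshold

# Example: keep only the lowest-quartile MRS traffic.
tau = threshold_from_load([0.2, 0.8, 0.5, 0.9], keep_ratio=0.25)
assert should_update(0.1, tau) and not should_update(0.9, tau)
```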
4 Experiments
We conducted extensive experiments to evaluate the effectiveness and generalizability of the proposed IntellectReq. Part of the experimental setup, results, and analysis is given in the Appendix.
4.1 Experimental Setup
Datasets. We evaluate on Amazon CDs (CDs), Amazon Electronic (Electronic), and Douban Book (Book), three widely used public benchmarks in recommendation tasks.
Evaluation Metrics. We use the widely adopted AUC and UAUC (note that a 0.1% absolute AUC gain is regarded as significant for the CTR task (Yan et al., 2022b; Lv et al., 2023b; Kang and McAuley, 2018; Zhou et al., 2018)), as well as HitRate and NDCG, as the metrics.
Baselines. To verify applicability, the following representative sequential modeling approaches are implemented and compared with their counterparts combined with the proposed method. DUET (Lv et al., 2023b) and APG (Yan et al., 2022b) are the state of the art in EC-CDR; they generate parameters through edge-cloud collaboration for different tasks, and with the cloud generator model the on-edge model can generalize to the current data distribution of each session without on-edge training. GRU4Rec (Hidasi et al., 2016), DIN (Zhou et al., 2018), and SASRec (Kang and McAuley, 2018) are three of the most widely used sequential recommendation methods in academia and industry, which respectively introduce GRUs, attention, and self-attention into the recommendation system. LOF (Breunig et al., 2000) and OC-SVM (Tax, 2002) estimate the density of a given point via the ratio of the local reachability of its neighbors and itself; they can be used to detect changes in the distribution of click sequences. For IntellectReq, we use SASRec as the edge model unless otherwise stated, but note that IntellectReq applies broadly to sequential recommendation models such as DIN, GRU4Rec, etc.
4.2 Experimental Results
4.2.1 Quantitative Results
Figure 5: Performance w.r.t. request-frequency curve, based on the on-edge dynamic model from one step earlier.
Figure 6: Performance w.r.t. request frequency, based on the on-edge dynamic model from one step earlier.
Figure 7: Performance w.r.t. request frequency, based on the on-edge static model.
Figures 5, 6, and 7 summarize the quantitative results of our framework and other methods on the CDs and Electronic datasets. The experiments are based on the state-of-the-art EC-CDR frameworks DUET and APG. As shown in Figures 5-6, we combine the parameter generation framework with three sequential recommendation models (DIN, GRU4Rec, and SASRec) and evaluate with the AUC and UAUC metrics on the CDs and Book datasets. We have the following findings: (1) If all edge models are updated at moment t-1, the DUET framework (DUET) and the APG framework (APG) can be viewed as the performance upper bound of all methods, since DUET and APG are evaluated at a fixed 100% request frequency while the other methods are evaluated at increasing frequencies. If all edge models equal the cloud-pretrained model, IntellectReq can even beat DUET, which indicates that in EC-CDR not every edge needs to be updated at every moment; in fact, model parameters generated from user data at some moments can be detrimental to performance. Note that directly comparing the other methods with DUET and APG is not fair, as DUET and APG use a fixed 100% request frequency and cannot be deployed at lower request frequencies. (2) The random request method (DUET (Random), APG (Random)) works under any request budget. However, in most cases it does not give the optimal request scheme for a given budget (e.g., Row 1), and the correlation between its performance and request frequency tends to be linear. The performance of random requests is unstable and unpredictable; these methods outperform the others only in a few cases. (3) LOF (DUET (LOF), APG (LOF)) and OC-SVM (DUET (OC-SVM), APG (OC-SVM)) are simple baselines that can produce an optimal request scheme only under one special, specific request budget. They have two weaknesses: they consume substantial resources and thus significantly reduce the calculation speed, and they only work under a specific request budget rather than an arbitrary one; for example, in the first row, the request frequency of OC-SVM can only take a single fixed value. (4) In most cases, our IntellectReq produces the optimal request scheme under any request budget.
4.2.2 Mis-Recommendation Score and Profit
Figure 8: Mis-Recommendation Score and Revenue.
To further study the effectiveness of MRD, we visualize the request timing and revenue in Figure 8 and analyze the relationship between requests and revenue. Every 100 users are randomly assigned to one of 15 groups. The figure is divided into three parts: the first assesses the request, and the second and third assess the benefit. The metric used here is the Mis-Recommendation Score (MRS), which evaluates the request revenue.
MRS measures whether a recommendation will be made in error; in other words, it can be viewed as an evaluation of the model's generalization ability. The lower the score, the higher the probability of a mis-recommendation and of requesting model parameters.
• IntellectReq predicts the MRS based on the uncertainty and the click sequences at moments t and t-1.
• DUET (Random) randomly selects edges that request the cloud model to update their parameters. Its MRS can be considered an arbitrary constant; we take the average of IntellectReq's MRS as its value.
• DUET (w. Request) means all edge models are updated at moment t.
• DUET (w/o. Request) means no edge model is updated: the model from moment t-1 in Figures 5 and 6, and the model from moment 0 in Figure 7.
• Request Revenue represents the revenue, i.e., the DUET (w. Request) curve minus the DUET (w/o. Request) curve.
From Figure 8, we have the following observations: (1) The trends of MRS and DUET revenue typically move in opposite directions. When the MRS value is low, IntellectReq tends to believe that the edge's model cannot generalize well to the current data distribution, so it uses the most recent real-time data to request model parameters; the revenue at such times is frequently positive and relatively high. When the MRS value is high, IntellectReq tends to continue using the model updated at the previous moment t-1 instead of t, believing that the on-edge model still generalizes well to the current data distribution; requesting parameters at such points frequently yields low, negative revenue. (2) Since the MRS of DUET (Random) is constant, it cannot predict the revenue of each request, and its performance curve changes randomly owing to the irregular ordering of the groups.
4.2.3 Ablation Study
Figure 9: Ablation study on model architecture.
We conducted an ablation study to show the effectiveness of the different components of IntellectReq; the results are shown in Figure 9. We use w/o. and w. to denote without and with, respectively:
• IntellectReq means both DM and MRD are used.
• (w/o. DM) means MRD is used but DM is not.
• (w/o. MRD) means DM is used but MRD is not.
From the figure, we have the following observations: (1) Generally, IntellectReq achieves the best performance across the different evaluation metrics in most cases, demonstrating its effectiveness. (2) When the request frequency is small, the difference between IntellectReq and IntellectReq (w/o. DM) is not immediately apparent, as shown in Fig. 9(d); the difference becomes more noticeable as the request frequency increases within a certain range. In brief, the gap first shrinks, then grows, and finally shrinks again.
4.2.4 Time and Space Cost
Most edges have limited storage space, so the on-edge model must be small yet sufficient. The edge's computing power is also rather limited, while completing the recommendation task on the edge requires extensive real-time processing, so the model deployed on the edge must be both simple and fast.
Therefore, we analyze whether these methods are controllable and highly profitable under the DUET framework; the additional time and space resource consumption is shown in Table 1.
Table 1: Extra Time and Space Cost on the CDs dataset.
Method        Controllable  Profitable  Time Cost       Space Cost (Param.)
LOF           ✗             ✓           225s / 11.3ms   ≈ 0
OC-SVM        ✗             ✓           160s / 9.7ms    ≈ 0
Random        ✓             ✗           0s / 0.8ms      ≈ 0
IntellectReq  ✓             ✓           11s / 7.9ms     ≈ 5.06k
In the time-cost column, the symbol ``/'' separates cloud preprocessing from edge inference. Cloud preprocessing means the cloud server first computes MRS values from recent user data, determines the threshold according to its communication budget, and sends it to the edges; edge inference refers to computing the MRS whenever the on-edge click sequence is updated. The experimental results show that: 1) In time consumption, random requests are the fastest for both cloud preprocessing and edge inference, followed by our IntellectReq; LOF and OC-SVM are the slowest. 2) In space consumption, Random, LOF, and OC-SVM require essentially no additional space, whereas our method requires deploying an additional 5.06k parameters on the edge. 3) In controllability, Random and our IntellectReq can realize edge-cloud communication under an arbitrary communication budget, while LOF and OC-SVM cannot. 4) In profitability, LOF, OC-SVM, and our IntellectReq all achieve high revenue, but random requests do not. In general, our IntellectReq requires only minimal time consumption (it does not affect real-time performance) and space consumption (it is easy to deploy on smart edges) while being both controllable and highly profitable.
5 Conclusion
In this paper, we argue that under the EC-CDR framework, most communications requesting new parameters from the cloud-based recommendation system are unnecessary because on-edge data distributions are often stable. We introduced IntellectReq, a low-resource solution for calculating request value and ensuring adaptive, high-revenue edge-cloud communication. IntellectReq formulates a novel edge-intelligence task to identify out-of-domain data, and it maps real-time user behavior to a normal distribution, using multi-sampling outputs to assess the edge model's adaptability to user actions. Extensive experiments on three public benchmarks confirm IntellectReq's efficiency and broad applicability, promoting a more effective edge-cloud collaborative recommendation approach.
ACKNOWLEDGMENT
This work was supported by the National Key R&D Program of China (No. 2022ZD0119100), the Scientific Research Fund of Zhejiang Provincial Education Department (Y202353679), the National Natural Science Foundation of China (No. 62376243, 62037001, U20A20387), the StarryNight Science Fund of Zhejiang University Shanghai Institute for Advanced Study (SN-ZJU-SIAS-0010), a Project of Shanghai AI Laboratory (P22KS00111), and the Program of Zhejiang Province Science and Technology (2022C01044).
References
Markus M. Breunig, Hans-Peter Kriegel, Raymond T. Ng, and Jörg Sander. 2000. LOF: Identifying Density-Based Local Outliers. In Proceedings of the 2000 ACM SIGMOD International Conference on Management of Data. 93–104.
Han Cai, Chuang Gan, Ligeng Zhu, and Song Han. 2020. TinyTL: Reduce Activations, Not Trainable Parameters for Efficient On-Device Learning. (2020).
Defu Cao, Yixiang Zheng, Parisa Hassanzadeh, Simran Lamba, Xiaomo Liu, and Yan Liu. 2023. Large Scale Financial Time Series Forecasting with Multi-faceted Model. In Proceedings of the Fourth ACM International Conference on AI in Finance (ICAIF '23). ACM, New York, NY, USA, 472–480. https://doi.org/10.1145/3604237.3626868
Jianxin Chang, Chen Gao, Yu Zheng, Yiqun Hui, Yanan Niu, Yang Song, Depeng Jin, and Yong Li. 2021. Sequential Recommendation with Graph Neural Networks. In Proceedings of the 44th International ACM SIGIR Conference on Research and Development in Information Retrieval. 378–387.
Zhengyu Chen and Donglin Wang. 2021. Multi-Initialization Meta-Learning with Domain Adaptation. In ICASSP 2021 – 2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 1390–1394.
Zhengyu Chen, Teng Xiao, and Kun Kuang. 2022. BA-GNN: On Learning Bias-Aware Graph Neural Network. In 2022 IEEE 38th International Conference on Data Engineering (ICDE). IEEE, 3012–3024.
Zhengyu Chen, Teng Xiao, Kun Kuang, Zheqi Lv, Min Zhang, Jinluan Yang, Chengqiang Lu, Hongxia Yang, and Fei Wu. 2023. Learning to Reweight for Graph Neural Network. arXiv preprint arXiv:2312.12475 (2023).
Zhengyu Chen, Teng Xiao, Kun Kuang, Zheqi Lv, Min Zhang, Jinluan Yang, Chengqiang Lu, Hongxia Yang, and Fei Wu. 2024. Learning to Reweight for Generalizable Graph Neural Network. In Proceedings of the AAAI Conference on Artificial Intelligence (2024).
Zhengyu Chen, Ziqing Xu, and Donglin Wang. 2021. Deep Transfer Tensor Decomposition with Orthogonal Constraint for Recommender Systems. In Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 35. 4010–4018.
David Ha, Andrew Dai, and Quoc V. Le. 2017. HyperNetworks. (2017).
Balázs Hidasi, Alexandros Karatzoglou, Linas Baltrunas, and Domonkos Tikk. 2016. Session-Based Recommendations with Recurrent Neural Networks. In International Conference on Learning Representations 2016.
Rongjie Huang, Jiawei Huang, Dongchao Yang, Yi Ren, Luping Liu, Mingze Li, Zhenhui Ye, Jinglin Liu, Xiang Yin, and Zhou Zhao. 2023. Make-An-Audio: Text-to-Audio Generation with Prompt-Enhanced Diffusion Models. arXiv preprint arXiv:2301.12661 (2023).
Rongjie Huang, Max W. Y. Lam, Jun Wang, Dan Su, Dong Yu, Yi Ren, and Zhou Zhao. 2022a. FastDiff: A Fast Conditional Diffusion Model for High-Quality Speech Synthesis. In IJCAI. ijcai.org, 4157–4163.
Rongjie Huang, Yi Ren, Jinglin Liu, Chenye Cui, and Zhou Zhao. 2022b. GenerSpeech: Towards Style Transfer for Generalizable Out-of-Domain Text-to-Speech. Advances in Neural Information Processing Systems 35 (2022), 10970–10983.
Wei Ji, Renjie Liang, Lizi Liao, Hao Fei, and Fuli Feng. 2023a. Partial Annotation-Based Video Moment Retrieval via Iterative Learning. In Proceedings of the 31st ACM International Conference on Multimedia.
Wei Ji, Xiangyan Liu, An Zhang, Yinwei Wei, and Xiang Wang. 2023b. Online Distillation-Enhanced Multi-Modal Transformer for Sequential Recommendation. In Proceedings of the 31st ACM International Conference on Multimedia.
Wang-Cheng Kang and Julian McAuley. 2018. Self-Attentive Sequential Recommendation. In 2018 IEEE International Conference on Data Mining (ICDM). IEEE, 197–206.
Sara Latifi, Noemi Mauro, and Dietmar Jannach. 2021. Session-Aware Recommendation: A Surprising Quest for the State-of-the-Art. Information Sciences 573 (2021), 291–315.
Haoxuan Li, Yanghao Xiao, Chunyuan Zheng, Peng Wu, and Peng Cui. 2023e. Propensity Matters: Measuring and Enhancing Balancing for Recommendation. In International Conference on Machine Learning. PMLR, 20182–20194.
Haoxuan Li, Yanghao Xiao, Chunyuan Zheng, Peng Wu, Zhi Geng, Xu Chen, and Peng Cui. 2024. Debiased Collaborative Filtering with Kernel-Based Causal Balancing. In International Conference on Learning Representations.
Juncheng Li, Xin He, Longhui Wei, Long Qian, Linchao Zhu, Lingxi Xie, Yueting Zhuang, Qi Tian, and Siliang Tang. 2022a. Fine-Grained Semantically Aligned Vision-Language Pre-Training. Advances in Neural Information Processing Systems 35 (2022), 7290–7303.
Juncheng Li, Kaihang Pan, Zhiqi Ge, Minghe Gao, Hanwang Zhang, Wei Ji, Wenqiao Zhang, Tat-Seng Chua, Siliang Tang, and Yueting Zhuang. 2023a. Fine-Tuning Multimodal LLMs to Follow Zero-Shot Demonstrative Instructions. arXiv preprint arXiv:2308.04152 (2023).
Li Li, Chenwei Wang, You Qin, Wei Ji, and Renjie Liang. 2023b. Biased-Predicate Annotation Identification via Unbiased Visual Predicate Representation. In Proceedings of the 31st ACM International Conference on Multimedia (MM '23). ACM, New York, NY, USA, 4410–4420. https://doi.org/10.1145/3581783.3611847
Mengze Li, Han Wang, Wenqiao Zhang, Jiaxu Miao, Zhou Zhao, Shengyu Zhang, Wei Ji, and Fei Wu. 2023d. Winner: Weakly-Supervised Hierarchical Decomposition and Alignment for Spatio-Temporal Video Grounding. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 23090–23099.
Mengze Li, Tianbao Wang, Jiahe Xu, Kairong Han, Shengyu Zhang, Zhou Zhao, Jiaxu Miao, Wenqiao Zhang, Shiliang Pu, and Fei Wu. 2023c. Multi-Modal Action Chain Abductive Reasoning. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). 4617–4628.
Mengze Li, Tianbao Wang, Haoyu Zhang, Shengyu Zhang, Zhou Zhao, Jiaxu Miao, Wenqiao Zhang, Wenming Tan, Jin Wang, Peng Wang, et al. 2022b. End-to-End Modeling via Information Tree for One-Shot Natural Language Spatial Video Grounding. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). 8707–8717.
Xin-Yu Lin, Yi-Yan Xu, Wen-Jie Wang, Yang Zhang, and Fu-Li Feng. 2023. Mitigating Spurious Correlations for Self-Supervised Recommendation. Machine Intelligence Research 20, 2 (2023), 263–275.
Zheqi Lv, Feng Wang, Shengyu Zhang, Kun Kuang, Hongxia Yang, and Fei Wu. 2022. Personalizing Intervened Network for Long-Tailed Sequential User Behavior Modeling. arXiv preprint arXiv:2208.09130 (2022).
Zheqi Lv, Feng Wang, Shengyu Zhang, Wenqiao Zhang, Kun Kuang, and Fei Wu. 2023a. Parameters Efficient Fine-Tuning for Long-Tailed Sequential Recommendation. In CAAI International Conference on Artificial Intelligence. Springer, 442–459.
Zheqi Lv, Wenqiao Zhang, Shengyu Zhang, Kun Kuang, Feng Wang, Yongwei Wang, Zhengyu Chen, Tao Shen, Hongxia Yang, Beng Chin Ooi, and Fei Wu. 2023b. DUET: A Tuning-Free Device-Cloud Collaborative Parameters Generation Framework for Efficient Device Model Generalization. In Proceedings of the ACM Web Conference 2023.
Othmane Marfoq, Giovanni Neglia, Aurélien Bellet, Laetitia Kameni, and Richard Vidal. 2021. Federated Multi-Task Learning under a Mixture of Distributions. Advances in Neural Information Processing Systems 34 (2021), 15434–15447.
Brendan McMahan, Eider Moore, Daniel Ramage, Seth Hampson, and Blaise Aguera y Arcas. 2017. Communication-Efficient Learning of Deep Networks from Decentralized Data. In Artificial Intelligence and Statistics. PMLR, 1273–1282.
Jed Mills, Jia Hu, and Geyong Min. 2021. Multi-Task Federated Learning for Personalised Deep Neural Networks in Edge Computing. IEEE Transactions on Parallel and Distributed Systems 33, 3 (2021), 630–641.
Xufeng Qian, Yue Xu, Fuyu Lv, Shengyu Zhang, Ziwen Jiang, Qingwen Liu, Xiaoyi Zeng, Tat-Seng Chua, and Fei Wu. 2022. Intelligent Request Strategy Design in Recommender System. In KDD '22: The 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining. ACM, 3772–3782.
Fang-Yu Qin, Zhe-Qi Lv, Dan-Ni Wang, Bo Hu, and Chao Wu. 2020. Health Status Prediction for the Elderly Based on Machine Learning. Archives of Gerontology and Geriatrics 90 (2020), 104121.
Steffen Rendle, Christoph Freudenthaler, and Lars Schmidt-Thieme. 2010. Factorizing Personalized Markov Chains for Next-Basket Recommendation. The Web Conference (2010).
Victor Sanh, Lysandre Debut, Julien Chaumond, and Thomas Wolf. 2019. DistilBERT, a Distilled Version of BERT: Smaller, Faster, Cheaper and Lighter. arXiv preprint arXiv:1910.01108 (2019).
Jiajie Su, Chaochao Chen, Zibin Lin, Xi Li, Weiming Liu, and Xiaolin Zheng. 2023a. Personalized Behavior-Aware Transformer for Multi-Behavior Sequential Recommendation. In Proceedings of the 31st ACM International Conference on Multimedia. 6321–6331.
Jiajie Su, Chaochao Chen, Weiming Liu, Fei Wu, Xiaolin Zheng, and Haoming Lyu. 2023b. Enhancing Hierarchy-Aware Graph Networks with Deep Dual Clustering for Session-Based Recommendation. In Proceedings of the ACM Web Conference 2023. 165–176.
Fei Sun, Jun Liu, Jian Wu, Changhua Pei, Xiao Lin, Wenwu Ou, and Peng Jiang. 2019. BERT4Rec: Sequential Recommendation with Bidirectional Encoder Representations from Transformer. In Proceedings of the 28th ACM International Conference on Information and Knowledge Management. 1441–1450.
Zihao Tang, Zheqi Lv, Shengyu Zhang, Fei Wu, and Kun Kuang. 2024a. ModelGPT: Unleashing LLM's Capabilities for Tailored Model Generation. arXiv preprint arXiv:2402.12408 (2024).
Zihao Tang, Zheqi Lv, Shengyu Zhang, Yifan Zhou, Xinyu Duan, Kun Kuang, and Fei Wu. 2024b. AuG-KD: Anchor-Based Mixup Generation for Out-of-Domain Knowledge Distillation. In 12th International Conference on Learning Representations (ICLR 2024), Vienna, Austria, May 7–11, 2024. OpenReview.net. https://openreview.net/forum?id=fcqWJ8JgMR
David Martinus Johannes Tax. 2002. One-Class Classification: Concept Learning in the Absence of Counter-Examples. (2002).
Yunze Tong, Junkun Yuan, Min Zhang, Didi Zhu, Keli Zhang, Fei Wu, and Kun Kuang. 2023. Quantitatively Measuring and Contrastively Exploring Heterogeneity for Domain Generalization. In KDD. ACM, 2189–2200.
Xiao Wang, Peng Cui, Jing Wang, Jian Pei, Wenwu Zhu, and Shiqiang Yang. 2017. Community Preserving Network Embedding. In Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 31.
Shu Wu, Yuyuan Tang, Yanqiao Zhu, Liang Wang, Xing Xie, and Tieniu Tan. 2019. Session-Based Recommendation with Graph Neural Networks. In Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 33. 346–353.
Yiquan Wu, Weiming Lu, Yating Zhang, Adam Jatowt, Jun Feng, Changlong Sun, Fei Wu, and Kun Kuang. 2023a. Focus-Aware Response Generation in Inquiry Conversation. In Findings of the Association for Computational Linguistics: ACL 2023. 12585–12599.
Yiquan Wu, Siying Zhou, Yifei Liu, Weiming Lu, Xiaozhong Liu, Yating Zhang, Changlong Sun, Fei Wu, and Kun Kuang. 2023b. Precedent-Enhanced Legal Judgment Prediction with LLM and Domain-Model Collaboration. arXiv preprint arXiv:2310.09241 (2023).
Xinyu Lin, Wenjie Wang, Jujia Zhao, Yongqi Li, Fuli Feng, and Tat-Seng Chua. 2024. Temporally and Distributionally Robust Optimization for Cold-Start Recommendation. In AAAI.
Bencheng Yan, Pengjie Wang, Kai Zhang, Feng Li, Jian Xu, and Bo Zheng. 2022b. APG: Adaptive Parameter Generation Network for Click-Through Rate Prediction. In Advances in Neural Information Processing Systems.
Yikai Yan, Chaoyue Niu, Renjie Gu, Fan Wu, Shaojie Tang, Lifeng Hua, Chengfei Lyu, and Guihai Chen. 2022a. On-Device Learning for Model Personalization with Large-Scale Cloud-Coordinated Domain Adaption. In KDD '22: The 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining. 2180–2190.
Jiangchao Yao, Feng Wang, Xichen Ding, Shaohu Chen, Bo Han, Jingren Zhou, and Hongxia Yang. 2022a. Device-Cloud Collaborative Recommendation via Meta Controller. In KDD '22: The 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining. 4353–4362.
Jiangchao Yao, Shengyu Zhang, Yang Yao, Feng Wang, Jianxin Ma, Jianwei Zhang, Yunfei Chu, Luo Ji, Kunyang Jia, Tao Shen, et al. 2022b. Edge-Cloud Polarization and Collaboration: A Comprehensive Survey for AI. IEEE Transactions on Knowledge and Data Engineering (2022).
Fengda Zhang, Kun Kuang, Long Chen, Yuxuan Liu, Chao Wu, and Jun Xiao. 2022a. Fairness-Aware Contrastive Learning with Partially Annotated Sensitive Attributes. In The Eleventh International Conference on Learning Representations.
Fengda Zhang, Kun Kuang, Long Chen, Zhaoyang You, Tao Shen, Jun Xiao, Yin Zhang, Chao Wu, Fei Wu, Yueting Zhuang, et al. 2023b. Federated Unsupervised Representation Learning. Frontiers of Information Technology & Electronic Engineering 24, 8 (2023), 1181–1193.
Shengyu Zhang, Fuli Feng, Kun Kuang, Wenqiao Zhang, Zhou Zhao, Hongxia Yang, Tat-Seng Chua, and Fei Wu. 2023a. Personalized Latent Structure Learning for Recommendation. IEEE Transactions on Pattern Analysis and Machine Intelligence (2023).
Shengyu Zhang, Tan Jiang, Tan Wang, Kun Kuang, Zhou Zhao, Jianke Zhu, Jin Yu, Hongxia Yang, and Fei Wu. 2020. DeVLBert: Learning Deconfounded Visio-Linguistic Representations. In MM '20: The 28th ACM International Conference on Multimedia. ACM, 4373–4382.
Wenqiao Zhang, Changshuo Liu, Lingze Zeng, Beng Chin Ooi, Siliang Tang, and Yueting Zhuang. 2023c. Learning in Imperfect Environment: Multi-Label Classification with Long-Tailed Distribution and Partial Labels. In Proceedings of the IEEE/CVF International Conference on Computer Vision. 1423–1432.
Wenqiao Zhang and Zheqi Lv. 2024. Revisiting the Domain Shift and Sample Uncertainty in Multi-Source Active Domain Transfer. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition.
Wenqiao Zhang, Haochen Shi, Jiannan Guo, Shengyu Zhang, Qingpeng Cai, Juncheng Li, Sihui Luo, and Yueting Zhuang. 2021. MAGIC: Multimodal relAtional Graph adversarIal inferenCe for Diverse and Unpaired Text-Based Image Captioning. arXiv preprint arXiv:2112.06558 (2021).
Wenqiao Zhang, Lei Zhu, James Hallinan, Shengyu Zhang, Andrew Makmur, Qingpeng Cai, and Beng Chin Ooi. 2022b. BoostMIS: Boosting Medical Image Semi-Supervised Learning with Adaptive Pseudo Labeling and Informative Active Annotation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 20666–20676.
Yifei Zhang, Hao Zhu, Zixing Song, Piotr Koniusz, Irwin King, et al. 2024. Mitigating the Popularity Bias of Graph Collaborative Filtering: A Dimensional Collapse Perspective. Advances in Neural Information Processing Systems 36 (2024).
Guorui Zhou, Xiaoqiang Zhu, Chenru Song, Ying Fan, Han Zhu, Xiao Ma, Yanghui Yan, Junqi Jin, Han Li, and Kun Gai. 2018. Deep Interest Network for Click-Through Rate Prediction. In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining. 1059–1068.
Didi Zhu, Yinchuan Li, Yunfeng Shao, Jianye Hao, Fei Wu, Kun Kuang, Jun Xiao, and Chao Wu. 2023a. Generalized Universal Domain Adaptation with Generative Flow Networks. In ACM Multimedia. ACM, 8304–8315.
Didi Zhu, Yinchuan Li, Junkun Yuan, Zexi Li, Kun Kuang, and Chao Wu. 2023b. Universal Domain Adaptation via Compressive Attention Matching. In Proceedings of the IEEE/CVF International Conference on Computer Vision. 6974–6985.
Appendix A
This is the Appendix for ``Intelligent Model Update Strategy for Sequential Recommendation''.
A.1 Supplementary Method
A.1.1 Notations and Definitions
We summarize the notations and definitions in Table 2.
Table 2: Notations and Definitions.
Notation                                                         Definition
u                                                                User
v                                                                Item
s                                                                Behavior sequence
d                                                                Edge
D = {d^(i)}_{i=1}^{N_d}                                          Set of edges
S_H(i), S_R(i), S_MRD                                            History samples, real-time samples, MRD samples
N_d, N_H(i), N_R(i)                                              Number of edges, of history data, of real-time data
Θ_g, Θ_d, Θ_MRD                                                  Parameters of the global cloud model, the local edge model, and the MRD model
M_g(·;Θ_g), M_d(i)(·;Θ_d(i)), M_c(i)^t(S_MRD;Θ_MRD)              Global cloud model, local edge recommendation model, local edge control model
L_rec, L_MRD                                                     Recommendation loss, mis-recommendation loss
Ω                                                                Feature extractor
A.1.2 Optimization Target
To describe it in the simplest way, we assume that the set of edges is $D = \{d^{(i)}\}_{i=1}^{N_d}$, the set updated using the baseline method is $D'_u = \{d^{(i)}\}_{i=1}^{N'_u}$, and the set updated using our method is $D_u = \{d^{(i)}\}_{i=1}^{N_u}$. $N_d$, $N'_u$, and $N_u$ are the sizes of $D$, $D'_u$, and $D_u$, respectively. The communication upper bound is set to $N_{thres}$. Suppose the ground-truth value $y$, the prediction of the baseline models $\hat{y}'$, and the prediction of our model $\hat{y}$ are row vectors. Our optimization target is then to obtain the highest model performance while limiting the upper bound of the communication frequency:
(21)
\[
\begin{aligned}
&\text{Maximize}\;\; \hat{y}\,y^{T},\\
&\text{subject to}\;\; 0 \le N_u \le N_{thres},\;\; N_u \le N'_u,\;\; D_u \subset D.
\end{aligned}
\]
In this case, the improvement of our method is $\Delta = \hat{y}y^{T} - \hat{y}'y^{T}$. Alternatively, the target can be regarded as reducing the communication frequency without degrading performance:
(22)
\[
\begin{aligned}
&\text{Minimize}\;\; N_u,\\
&\text{subject to}\;\; 0 \le N_u \le N_{thres},\;\; \hat{y}\,y^{T} \ge \hat{y}'\,y^{T},\;\; D_u \subset D.
\end{aligned}
\]
In this case, the improvement of our method is $\Delta = N - N_u$.
A.2 Supplementary Experimental Results
A.2.1 Datasets
We evaluate IntellectReq and the baselines on Amazon CDs (CDs) and Amazon Electronic (Electronic) (https://jmcauley.ucsd.edu/data/amazon/), and Douban Book (Book) (https://www.kaggle.com/datasets/fengzhujoey/douban-datasetratingreviewside-information), three widely used public benchmarks in recommendation tasks; Table 3 shows their statistics. Following conventional practice, all user-item pairs in the dataset are treated as positive samples. To conduct sequential recommendation experiments, we arrange the items clicked by each user into a sequence ordered by timestamp. Following (Zhou et al., 2018; Kang and McAuley, 2018; Hidasi et al., 2016), negative sampling ratios of 1:4 and 1:199 are used in the training set and testing set, respectively.
Negative sampling considers all user-item pairs that do not exist in the dataset as negative samples.
Table 3: Statistics of Datasets.
              Amazon CDs   Amazon Electronic   Douban Books
#User          1,578,597    4,201,696             46,549
#Item            486,360      476,002            212,996
#Interaction   3,749,004    7,824,482          1,861,533
Density        0.0000049    0.0000039          0.0002746
A.2.2 Evaluation Metrics
In the experiments, we use the widely adopted AUC, UAUC, HitRate, and NDCG as the metrics to evaluate model performance. They are defined by the following equations:
(23)
\[
\mathrm{AUC} = \frac{\sum_{x_0\in D^T}\sum_{x_1\in D^F} \mathbb{1}\,[f(x_1) < f(x_0)]}{|D^T|\,|D^F|},
\]
(24)
\[
\mathrm{UAUC} = \frac{1}{|U|}\sum_{u\in U}\frac{\sum_{x_0\in D_u^T}\sum_{x_1\in D_u^F} \mathbb{1}\,[f(x_1) < f(x_0)]}{|D_u^T|\,|D_u^F|},
\]
(25)
\[
\mathrm{NDCG@K} = \sum_{u\in U}\frac{1}{|U|}\,\frac{2^{\mathbb{1}(R_{u,g_u}\le K)}-1}{\log_2\!\big(R_{u,g_u}+1\big)},
\]
(26)
\[
\mathrm{HitRate@K} = \frac{1}{|U|}\sum_{u\in U} \mathbb{1}(R_{u,g_u}\le K).
\]
In the equations above, $\mathbb{1}(\cdot)$ is the indicator function, $f$ is the model to be evaluated, and $R_{u,g_u}$ is the rank predicted by the model for the ground-truth item $g_u$ of user $u$. $D^T$ and $D^F$ are the positive and negative testing sample sets, respectively, and $D_u^T$ and $D_u^F$ are the positive and negative testing sample sets for user $u$, respectively.
A.2.3 Request Frequency and Threshold
Figure 10 shows the relationship between the request frequency and different thresholds.
Figure 10: Request frequency w.r.t. different thresholds.
A.3 Training Procedure and Inference Procedure
In this section, we describe the overall pipeline in detail in conjunction with Figure 11.
Figure 11: The overall pipeline of our proposed IntellectReq.
1. Training Procedure
① We first pre-train an EC-CDR framework; EC-CDR can use data to generate model parameters.
② MRD training procedure. 1) Construct the MRD dataset. Assume the current time is T. We take the model parameters generated under the EC-CDR framework from the data at moment t = 0 and apply that model to the data at the current moment t = T. We then obtain a prediction ŷ and compare it with y to determine whether the model makes a mis-recommendation. Repeating this with the data used for parameter generation ranging from t = 0 to t = T−1 constructs the MRD dataset. It contains three columns: the data used for parameter generation (x1), the current data (x2), and whether the model mis-recommends (y_MRD). 2) Train MRD. MRD is a fully connected neural network that takes x1 and x2 as input and fits the mis-recommendation label y_MRD. The resulting MRD can determine whether model parameters generated from data at some earlier moment are still valid for the current data; its output can be regarded as the Mis-Recommendation Score (MRS).
③ DM training procedure. We map the data into a Gaussian distribution via the Conditional-VAE method, then sample a feature vector from the distribution to complete the next-item prediction task, i.e., predicting the item the user will click next. This yields DM.
DM can compute multiple next-item predictions by sampling from the distribution multiple times, which is used to calculate the uncertainty.
④ Joint training procedure of MRD and DM. We use a fully connected neural network, denoted f(·), that takes the MRS and the uncertainty as input and fits y_MRD (the mis-recommendation label) in the MRD dataset.
2. Inference Procedure
The MRS is calculated on the cloud using all recent user data, and the MRS threshold is determined according to the load; this threshold is then sent to each edge. Suppose an edge last updated its model at some moment t = n with n < T. Is it necessary to update the model again at moment t = T, i.e., is the current model invalid for the current data distribution? We only need to feed the MRS and the uncertainty computed from the data at moments t = n and t = T into f(·) to decide. The output is in fact an invalidity degree, a continuous value between 0 and 1; whether to update the edge model depends on the threshold calculated on the cloud based on the load.
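As a sketch of step ④ and the inference procedure, a small fusion network can map the (MRS, uncertainty) pair to the invalidity degree described above. The two-layer MLP with a sigmoid output below is an assumption for illustration, not the released architecture.

```python
import torch
import torch.nn as nn

class JointScorer(nn.Module):
    """Step 4 sketch: f(.) fuses the MRD output (MRS) and the DM
    uncertainty into one invalidity degree in [0, 1], fit against the
    mis-recommendation label y_MRD with binary cross-entropy."""

    def __init__(self, hidden=16):
        super().__init__()
        self.f = nn.Sequential(nn.Linear(2, hidden), nn.ReLU(),
                               nn.Linear(hidden, 1), nn.Sigmoid())

    def forward(self, mrs, uncertainty):
        # mrs, uncertainty: shape-(B,) tensors computed at moments t_n and T
        x = torch.stack([mrs, uncertainty], dim=-1)       # (B, 2)
        return self.f(x).squeeze(-1)                      # invalidity degree

# At inference, the edge compares the invalidity degree against the
# cloud-issued, load-dependent threshold to decide whether to request
# new parameters.
```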
A.4 Hyperparameters and Training Schedules
We summarize the hyperparameters and training schedules of IntellectReq on the three datasets in Table 4.
Table 4: Hyperparameters and training schedules (shared across Amazon CDs, Amazon Electronic, and Douban Book).
GPU               Tesla A100
Optimizer         Adam
Learning rate     0.001
Batch size        1024
Sequence length   30
Dimension of z    1×64
N                 32
n                 10
A.4.1 Impact on the Real World
The following case is based on the dynamic model from the previous moment; if it were based on an on-edge static model, the improvement would be much more significant. We provide more intuitive data and examples to show the challenge and IntellectReq's impact on the real world:
Table 5: IntellectReq's Impact on the Real World.
               Google               Alibaba
               Bytes     FLOPs      Bytes     FLOPs
EC-CDR         4.69GB    152.46G    53.19GB   1.68T
IntellectReq   3.79GB    123.49G    43.08GB   1.36T
Δ              19.2%
(1) We calculate the number of bytes and FLOPs required for one parameter update: 48.5kB and 1.53M FLOPs. That is, updating a model on the edge requires transmitting 48.5kB of data through edge-cloud communication and consumes 1.53M FLOPs on the cloud model. (2) According to reports, Google processes 99,000 clicks per second, so it would need to transmit 48.5kB × 99k = 4.69GB per second and consume 1.53M × 99k = 152.46G of computing power on the cloud servers. Alibaba processes 1,150,000 clicks per second, so it would need to transmit 48.5kB × 1150k = 53.19GB per second and consume 1.53M × 1150k = 1.68T of computing power on the cloud servers. These are not even peak values. Obviously, such huge bandwidth and computing power consumption make it hard to update every edge's model at every moment, especially at peak times. (3) Today's distributed clouds can sometimes afford this computational volume, since enough servers can be called upon to support edge-cloud collaboration; however, such huge resource consumption is impractical in real scenarios. Moreover, according to our empirical study, IntellectReq brings 21.4% resource savings at equal performance under the APG framework, and 16.6% under the DUET framework. Summing up, IntellectReq saves about 19% of resources on average, which is very helpful for cost control and can facilitate the development of EC-CDR in practice; Table 5 compares our method IntellectReq and EC-CDR in the amount of transmitted data and the computing power consumed on the cloud. (4) During peak periods, resources become tight, causing freezes or even crashes, and this is before EC-CDR is even deployed, i.e., when edge-cloud communication performs only the most basic user-data transmission. IntellectReq can thus achieve better performance than EC-CDR under any resource limit ϵ, or equivalently, reach the performance for which EC-CDR requires ϵ+19% of the resources.
To determine whether to request parameters from the cloud, IntellectReq uses S_MRD to learn a Mis-Recommendation Detector, which decides whether to update the edge model through the EC-CDR framework. S_MRD is a dataset constructed from S_H, without any additional annotations, for training IntellectReq, and Θ_MRD denotes the learned parameters of the local MRD model.

(26) \text{IntellectReq}:\ \underbrace{\mathcal{M}_{c^{(i)}_t}\big(\mathcal{S}_{\text{MRD}};\,\Theta_{\text{MRD}}\big)}_{\text{Local Edge Model}}\ \xrightarrow{\ \text{Control}\ }\ \underbrace{\Big(\mathcal{M}_g\ \xrightleftharpoons[\ \text{Data}\ ]{[\text{Parameters}]}\ \mathcal{M}_{d^{(i)}}\Big)}_{\text{EC-CDR}}.

3.2 IntellectReq

Figure 3 gives an overview of the recommendation model, EC-CDR, and the IntellectReq framework, which consists of a Mis-Recommendation Detector (MRD) and a Distribution Mapper (DM) and achieves high revenue under any request budget. We first introduce EC-CDR and then present IntellectReq, which we propose to overcome the frequent, low-revenue requests of EC-CDR; IntellectReq attains high communication revenue under any edge-cloud communication budget. MRD determines whether to request parameters from the cloud model M_g or to keep using the edge recommendation model M_d, based on the real-time data S_{R^{(i)}}.
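Read operationally, Eq. (26) amounts to a small edge-side control loop. The sketch below illustrates it; `mrd_score` and `request_parameters` are illustrative stand-ins for the learned MRD head and the EC-CDR round trip, not the released implementation.

```python
import random

def mrd_score(seq_now, seq_at_update):
    """Stand-in for the learned MRD head: a score in [0, 1], where a low
    value means the stale parameters are likely to mis-recommend."""
    return random.random()  # placeholder for f_MRD(s^t, s^t', u^t)

def request_parameters(seq_now):
    """Stand-in for the EC-CDR round trip: the cloud generator returns
    fresh dynamic-layer parameters conditioned on the latest sequence."""
    return {"generated_from": tuple(seq_now)}

def on_new_click(state, item, threshold):
    state["seq"].append(item)
    if mrd_score(state["seq"], state["seq_at_update"]) <= threshold:
        state["params"] = request_parameters(state["seq"])   # request
        state["seq_at_update"] = list(state["seq"])
    return state   # otherwise keep the stale parameters (no communication)

state = {"seq": [3], "seq_at_update": [3], "params": request_parameters([3])}
for item in [7, 7, 1, 9]:
    state = on_new_click(state, item, threshold=0.1)
```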
DM helps MRD make further judgments by discriminating the uncertainty in the recommendation model's understanding of the data semantics.

3.2.1 The Framework of EC-CDR

In EC-CDR, a recommendation model with static layers and dynamic layers is trained as the global cloud model. The goal of EC-CDR can thus be formulated as the following optimization problem:

(3) \hat{y}^{(j)}_{H^{(i)}} = f_{\mathrm{rec}}\big(\Omega(x^{(j)}_{H^{(i)}};\,\Theta_{gb});\,\Theta_{gc}\big), \qquad \mathcal{L}_{\mathrm{rec}} = \sum_{i=1}^{N_d}\sum_{j=1}^{N_{R^{(i)}}} D_{ce}\big(y^{(j)}_{H^{(i)}},\,\hat{y}^{(j)}_{H^{(i)}}\big),

where D_ce(⋅,⋅) denotes the cross-entropy between two probability distributions, f_rec(⋅) denotes the dynamic layers of the recommendation model, and Ω(x^{(j)}_{H^{(i)}}; Θ_gb) denotes the static layers extracting features from x^{(j)}_{H^{(i)}}. EC-CDR decouples the edge model into a "static layers" plus "dynamic layers" training scheme to achieve better personalization. The primary factor enhancing the on-edge model's generalization to real-time data is the dynamic layers. Upon completion of training, the static layers' parameters, denoted Θ_gb, remain fixed as determined by Eq. (3).
Conversely, the dynamic layers' parameters, represented by Θ_gc, are dynamically generated from real-time data by the cloud generator. During edge inference, the cloud-based parameter generator uses the real-time click sequence s^{(j,t)}_{R^{(i)}} ∈ S_{R^{(i)}} to generate the parameters:

(4) h^{(n)}_{R^{(i)}} = L^{(n)}_{\mathrm{layer}}\big(e^{(j,t)}_{R^{(i)}} = E_{\mathrm{shared}}(s^{(j,t)}_{R^{(i)}})\big), \quad \forall n = 1, \cdots, N_l,

where E_shared(⋅) is the shared encoder and L^{(n)}_layer(⋅) is a linear layer that adapts e^{(j,t)}_{R^{(i)}}, the output of E_shared(⋅), to the features of the n-th dynamic layer. e^{(j,t)}_{R^{(i)}} is the embedding vector generated from the click sequence at moment t. The cloud generator treats the parameters of a fully connected layer as a matrix K^{(n)} ∈ R^{N_in × N_out}, where N_in and N_out are the numbers of input and output neurons of the n-th fully connected layer, respectively. The cloud generator g(⋅) then converts the real-time click sequence s^{(j,t)}_{R^{(i)}} into dynamic-layer parameters \hat{Θ}_gc via K^{(n)}_{R^{(i)}} = g^{(n)}(e^{(n)}_{R^{(i)}}). Since the superscript (n) is no longer needed below, we abbreviate g(⋅) = L^{(n)}_layer(E_shared(⋅)). The edge recommendation model then updates its parameters and performs inference as follows:

(5) \hat{y}^{(j,t)}_{R^{(i)}} = f_{\mathrm{rec}}\big(\Omega(x^{(j,t)}_{R^{(i)}};\,\Theta_{gb});\ \hat{\Theta}_{gc} = g(s^{(j,t)}_{R^{(i)}};\,\Theta_p)\big).
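The following PyTorch-style sketch illustrates this generation scheme under stated assumptions: a GRU stands in for the shared encoder E_shared, per-layer linear heads play the role of L^{(n)}_layer, and all shapes are illustrative rather than the authors' configuration.

```python
import torch
import torch.nn as nn

class CloudParamGenerator(nn.Module):
    """Sketch of the cloud generator g(.) of Eqs. (4)-(5): a shared encoder
    embeds the real-time click sequence, and one linear head per dynamic
    layer emits that layer's weight matrix K^(n)."""
    def __init__(self, n_items=1000, emb=64, dyn_shapes=((64, 32), (32, 1))):
        super().__init__()
        self.item_emb = nn.Embedding(n_items, emb)
        self.encoder = nn.GRU(emb, emb, batch_first=True)   # E_shared stand-in
        self.heads = nn.ModuleList(nn.Linear(emb, i * o)    # L^(n)_layer
                                   for (i, o) in dyn_shapes)
        self.dyn_shapes = dyn_shapes

    def forward(self, click_seq):               # (batch, seq_len) of item ids
        _, h = self.encoder(self.item_emb(click_seq))
        e = h[-1]                               # e^(j,t): sequence embedding
        return [head(e).view(-1, i, o)          # K^(n) for each dynamic layer
                for head, (i, o) in zip(self.heads, self.dyn_shapes)]

g = CloudParamGenerator()
K1, K2 = g(torch.randint(0, 1000, (2, 30)))     # two users, length-30 sequences
feats = torch.randn(2, 1, 64)                   # Ω(x; Θ_gb): static features
y_hat = torch.bmm(torch.bmm(feats, K1).relu(), K2)   # toy dynamic layers, Eq. (5)
```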
Figure 4. Overview of the proposed Distribution Mapper. Training procedure: the architecture includes the Recommendation Network, Prior Network, Posterior Network, and Next-item Prediction Network; the loss consists of the classification loss and the KL-divergence loss. Inference procedure: the architecture includes the Recommendation Network, Prior Network, and Next-item Prediction Network; the uncertainty is calculated from the multi-sampling output.

In cloud training, all layers of the cloud generator are optimized together with the static layers of the primary model, conditioned on the global history data S_{H^{(i)}} = {x^{(j)}_{H^{(i)}}, y^{(j)}_{H^{(i)}}}_{j=1}^{N_{H^{(i)}}}, instead of first optimizing the static layers of the primary model and then optimizing the cloud generator. The cloud generator's loss function is defined as follows (a sketch of this joint step is given below):

(6) \mathcal{L} = \sum_{i=1}^{N_d}\sum_{j=1}^{N_{H^{(i)}}} D_{ce}\big(y^{(j)}_{H^{(i)}},\,\hat{y}^{(j)}_{H^{(i)}}\big), \quad \text{with } \hat{y}^{(j)}_{H^{(i)}} \text{ computed through the generated parameters as in Eq. (5)}.
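A minimal sketch of this joint optimization, reusing the CloudParamGenerator sketch above. `static_net` stands in for Ω, and the two-layer bmm forward mirrors the toy dynamic layers of that sketch rather than the authors' architecture; the binary cross-entropy matches the CTR-style objective.

```python
import torch
import torch.nn.functional as F

def train_step(static_net, generator, batch, optimizer):
    """One joint update of Ω and g(.) on history data, per Eq. (6) (sketch)."""
    feats = static_net(batch["x"])                     # Ω(x; Θ_gb)
    K1, K2 = generator(batch["click_seq"])             # dynamic params, Eq. (4)
    logits = torch.bmm(torch.bmm(feats, K1).relu(), K2).squeeze(-1)
    loss = F.binary_cross_entropy_with_logits(logits, batch["y"])
    optimizer.zero_grad(); loss.backward(); optimizer.step()
    return loss.item()

gen = CloudParamGenerator()                            # from the sketch above
static_net = torch.nn.Linear(16, 64)                   # stand-in for Ω
opt = torch.optim.Adam([*gen.parameters(), *static_net.parameters()], lr=1e-3)
batch = {"x": torch.randn(2, 1, 16),
         "click_seq": torch.randint(0, 1000, (2, 30)),
         "y": torch.rand(2, 1).round()}
train_step(static_net, gen, batch, opt)
```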
EC-CDR can improve the generalization ability of the edge recommendation model. However, it cannot be easily deployed in a real-world environment, due to its high request frequency and low communication revenue. Under the EC-CDR framework, the moment t in Eq. (5) is always equal to the current moment T, which means that the edge and the cloud communicate at every moment. In practice, however, much of this communication is unnecessary, because the \hat{Θ}_gc generated from an earlier sequence may still work well enough. To alleviate this issue, we propose MRD and DM to decide when the edge recommendation model should update its parameters.

3.2.2 Mis-Recommendation Detector

The training procedure of MRD consists of two stages. The goal of the first stage is to construct an MRD dataset S_C from the user's historical data, without any additional annotation, to train the MRD. The cloud model M_g and the edge model M_d are trained in the same way as in the EC-CDR training procedure:

(7) \hat{y}^{(j,t,t')}_{R^{(i)}} = f_{\mathrm{rec}}\big(\Omega(x^{(j,t)}_{R^{(i)}};\,\Theta_{gb});\ \hat{\Theta}_{gc} = g(s^{(j,t')}_{R^{(i)}};\,\Theta_p)\big).

Here we set t' ≤ t = T. That is, when generating model parameters we use the click sequence s^{(j,t')}_{R^{(i)}} from an earlier moment t', but the resulting model is used to predict the current data.
From the prediction \hat{y}^{(j,t,t')}_{R^{(i)}} and the ground truth y^{(j,t)}_{R^{(i)}} we then obtain c^{(j,t,t')}, which indicates whether the sample is predicted correctly:

(8) c^{(j,t,t')} = \begin{cases} 1, & \hat{y}^{(j,t,t')}_{R^{(i)}} = y^{(j,t)}_{R^{(i)}}; \\ 0, & \hat{y}^{(j,t,t')}_{R^{(i)}} \neq y^{(j,t)}_{R^{(i)}}. \end{cases}

We then construct the mis-recommendation training dataset S_{MRD^{(i)}} = {s^{(j,t)}, s^{(j,t')}, c^{(j,t,t')}}_{0 ≤ t' ≤ t = T}, on which a dynamic-layer detector f_MRD(⋅) is trained with t = T and the cross-entropy loss l(⋅):

(9) \mathcal{L}_{\mathrm{MRD}} = \sum_{j=1}^{|S_{\mathrm{MRD}^{(i)}}|}\sum_{t'=1}^{T} l\big(y_j,\ \hat{y} = f_{\mathrm{MRD}}(s^{(j,t)}, s^{(j,t')};\,\Theta_{\mathrm{MRD}})\big).
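The label construction of Eqs. (7)-(8) needs no manual annotation, as the sketch below illustrates. `predict_with_stale_params` is an illustrative stand-in for Eq. (7), and the toy predictor at the end exists only to make the snippet run.

```python
# Self-supervised MRD dataset of Section 3.2.2: for each pair (t', t) with
# t' <= t = T, predict the item at T with parameters generated from the
# stale sequence s^(t') and record whether it was correct (Eq. (8)).
def build_mrd_dataset(click_seq, predict_with_stale_params):
    T = len(click_seq) - 1                    # index of the current moment
    samples = []
    for t_prime in range(T + 1):
        y_hat = predict_with_stale_params(click_seq[: t_prime + 1],  # stale s^(t')
                                          click_seq[:T])             # fresh input
        c = 1 if y_hat == click_seq[T] else 0  # mis-recommendation label
        samples.append((click_seq[:T], click_seq[: t_prime + 1], c))
    return samples                             # {s^(t), s^(t'), c^(t,t')}

# Toy run with a dummy predictor that just repeats the last stale item.
toy = build_mrd_dataset([5, 9, 2, 2, 7], lambda stale_seq, seq: stale_seq[-1])
```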
3.2.3 Distribution Mapper

Although the MRD can determine when to update edge parameters, simply mapping a click sequence to a single representation in a high-dimensional space is insufficient, due to the ubiquitous noise in click sequences. We therefore design the DM (Figure 4) to directly perceive the data-distribution shift and to quantify the uncertainty in the recommendation model's understanding of the data semantics. Inspired by the Conditional-VAE, we map click sequences to normal distributions. Unlike the MRD alone, the DM module introduces a variable u^{(j,t)} denoting the uncertainty into Eq. (9), giving:

(10) \mathcal{L}_{\mathrm{MRD}} = \sum_{j=1}^{|S_{\mathrm{MRD}^{(i)}}|}\sum_{t'=1}^{T} l\big(y_j,\ \hat{y} = f_{\mathrm{MRD}}(s^{(j,t)}, s^{(j,t')}, u^{(j,t)};\,\Theta_{\mathrm{MRD}})\big).

The uncertainty variable u^{(j,t)} reflects the recommendation model's understanding of the semantics of the data; DM focuses on how to learn it. The Distribution Mapper consists of three components, as shown in the figure in the Appendix: the Prior Network P(⋅) (PRN), the Posterior Network Q(⋅) (PON), and the Next-item Prediction Network f(⋅) (NPN), which comprises the static layers Ω(⋅) and the dynamic layers f_NPN(⋅). Note that Ω(⋅) here is the same as Ω(⋅) in Section 3.2.1, so there is almost no additional resource consumption. We first introduce the three components separately and then describe the training and inference procedures.

Prior Network. The Prior Network, with weights Θ_prior and Θ'_prior, maps the representation of a click sequence s^{(j,t)} to a prior probability distribution. We set this prior to a normal distribution with mean μ^{(j,t)}_prior = Ω_prior(s^{(j,t)}; Θ_prior) ∈ R^N and variance σ^{(j,t)}_prior = Ω'_prior(s^{(j,t)}; Θ'_prior) ∈ R^N:

(11) z^{(j,t)} \sim P(\cdot \mid s^{(j,t)}) = \mathcal{N}\big(\mu^{(j,t)}_{\mathrm{prior}},\ \sigma^{(j,t)}_{\mathrm{prior}}\big).

Posterior Network. The Posterior Network, with weights Θ_post and Θ'_post, enhances the training of the Prior Network by introducing posterior information. It maps the representation formed by concatenating the next-item representation r^{(j,t)} with the click-sequence representation s^{(j,t)} to a normal distribution, with mean μ^{(j,t)}_post = Ω_post(s^{(j,t)}; Θ_post) ∈ R^N and variance σ^{(j,t)}_post = Ω'_post(s^{(j,t)}; Θ'_post) ∈ R^N:

(12) z^{(j,t)} \sim Q(\cdot \mid s^{(j,t)}, r^{(j,t)}) = \mathcal{N}\big(\mu^{(j,t)}_{\mathrm{post}},\ \sigma^{(j,t)}_{\mathrm{post}}\big).
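Both networks are small Gaussian heads over a representation; the sketch below shows one way to realize them. The log-variance parameterization and the reparameterized sampling are common stabilizations rather than details confirmed by the paper, and all dimensions are illustrative.

```python
import torch
import torch.nn as nn

class GaussianMapper(nn.Module):
    """Sketch of the Prior/Posterior Networks (Eqs. (11)-(12)): two linear
    heads map a representation to the mean and (log-)variance of a normal
    distribution over z, from which z is sampled by reparameterization."""
    def __init__(self, in_dim=64, z_dim=64):
        super().__init__()
        self.mu = nn.Linear(in_dim, z_dim)       # Ω(.; Θ)   -> μ
        self.logvar = nn.Linear(in_dim, z_dim)   # Ω'(.; Θ') -> log σ²

    def forward(self, rep):
        mu, logvar = self.mu(rep), self.logvar(rep)
        std = (0.5 * logvar).exp()
        z = mu + std * torch.randn_like(std)      # reparameterized sample
        return z, mu, logvar

prior = GaussianMapper(in_dim=64)        # P(z | s): consumes the sequence rep
posterior = GaussianMapper(in_dim=128)   # Q(z | s, r): concatenated input
z, mu, logvar = posterior(torch.cat([torch.randn(2, 64), torch.randn(2, 64)], -1))
```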
Next-item Prediction Network. The Next-item Prediction Network, with weights Θ_c, predicts the embedding \hat{r}^{(j,t)} of the next item to be clicked from the user's click sequence s^{(j,t)}:

(13) \hat{r}^{(j,t)} = f_c\big(e^{(j,t)} = \Omega(s^{(j,t)};\,\Theta_b),\ z^{(j,t)};\,\Theta_c\big), \qquad \hat{y}^{(j,t)} = f_{\mathrm{rec}}\big(\Omega(x^{(j,t)};\,\Theta_{gb}),\ \hat{r}^{(j,t)};\ g(e^{(j,t)};\,\Theta_p)\big).

Training Procedure. Two losses are constructed during training: the recommendation prediction loss L_rec and the distribution difference loss L_dist. As in most recommendation model training, L_rec uses the binary cross-entropy loss l(⋅) to penalize the difference between \hat{y}^{(j,t)} and y^{(j,t)}; the difference is that here the NPN uses the feature z sampled from the posterior distribution Q in place of e in Eq. (5). In addition, L_dist penalizes the difference between the posterior distribution Q and the prior distribution P via the Kullback-Leibler divergence, "pulling" the two distributions towards each other.
The formulas for L_rec and L_dist are as follows:

(14) \mathcal{L}_{\mathrm{rec}} = \mathbb{E}_{z \sim Q(\cdot \mid s^{(j,t)},\, y^{(j,t)})}\big[\, l(y^{(j,t)} \mid \hat{y}^{(j,t)}) \,\big],

(15) \mathcal{L}_{\mathrm{dist}} = D_{\mathrm{KL}}\big(Q(z \mid s^{(j,t)}, y^{(j,t)}) \,\|\, P(z \mid s^{(j,t)})\big).

Finally, we optimize DM according to

(16) \mathcal{L}(y^{(j,t)}, s^{(j,t)}) = \mathcal{L}_{\mathrm{rec}} + \beta \cdot \mathcal{L}_{\mathrm{dist}}.

During training, the weights are randomly initialized.

Inference Procedure. In the inference procedure, the Posterior Network is removed from DM, since no posterior information is available at inference time. The uncertainty variable u^{(j,t)} is calculated from the multi-sampling outputs as follows:

(17) u^{(j,t)} = \mathrm{var}\big(\hat{r}_i = f_c(\Omega(s^{(j,t)};\,\Theta_b),\ z^{(j,t)}_{1 \sim n};\,\Theta_c)\big),

where n denotes the number of samples. Specifically, taking \hat{r}^{(j,t)} to be of dimension N × 1 and \hat{r}^{(j,t),(k)}_i to be the k-th entry of the vector \hat{r}^{(j,t)}_i, the variance is computed as:

(18) \mathrm{var}(\hat{r}_i) = \sum_{k=1}^{N} \mathrm{var}\big(\hat{r}^{(j,t),(k)}_{1 \sim n}\big).
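Concretely, Eqs. (17)-(18) reduce to drawing n samples of z from the prior, decoding each, and summing per-dimension variances. A minimal sketch, reusing the GaussianMapper sketch above; `npn` is an illustrative stand-in for f_c(⋅).

```python
import torch

def uncertainty(seq_rep, prior, npn, n=10):
    """u^(j,t) via Eqs. (17)-(18): variance of n decoded next-item
    embeddings, summed over the embedding dimension (one value per user)."""
    zs = torch.stack([prior(seq_rep)[0] for _ in range(n)])  # (n, batch, z_dim)
    r_hats = npn(zs)                                         # (n, batch, N)
    return r_hats.var(dim=0, unbiased=False).sum(dim=-1)     # (batch,)

npn = torch.nn.Linear(64, 64)        # stand-in for f_c(.)
prior = GaussianMapper(in_dim=64)    # from the sketch above
u = uncertainty(torch.randn(2, 64), prior, npn, n=10)
```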
3.2.4 On-edge Model Update

The Mis-Recommendation Score (MRS) is a variable calculated from the outputs of MRD and DM, and it directly determines whether the model needs to be updated:

(19) \mathrm{MRS} = 1 - f_{\mathrm{MRD}}(s^{(j,t)}, s^{(j,t')};\,\Theta_{\mathrm{MRD}}),

(20) \mathrm{Update} = \mathbb{1}(\mathrm{MRS} \le \mathrm{Threshold}),

where \mathbb{1}(⋅) is the indicator function. To obtain the threshold, we collect user data over a period of time, compute and sort the corresponding MRS values on the cloud, and set the threshold according to the load of the cloud server. For example, if the load of the cloud server needs to be reduced by 90%, i.e., to only 10% of its previous value, we simply send the value at the lowest 10% position to each edge as the threshold. During inference, each edge determines from Eqs. (19) and (20) whether it needs to update its model, i.e., whether it needs to request new parameters.
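A minimal sketch of this load-based threshold selection; numpy's quantile performs the sort-and-pick step, and the traffic log here is synthetic.

```python
import numpy as np

def threshold_for_budget(recent_mrs, request_budget=0.10):
    """Edges request when MRS <= threshold (Eq. (20)), so broadcasting the
    `request_budget` quantile keeps roughly that fraction of requests."""
    return float(np.quantile(np.asarray(recent_mrs), request_budget))

mrs_log = np.random.rand(100_000)          # MRS of recent user data (toy)
thr = threshold_for_budget(mrs_log, 0.10)  # send `thr` to every edge
```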
4 Experiments

We conduct extensive experiments to evaluate the effectiveness and generalizability of the proposed IntellectReq. Part of the experimental setup, results, and analysis is deferred to the Appendix.

4.1 Experimental Setup

Datasets. We evaluate on Amazon CDs (CDs), Amazon Electronic (Electronic), and Douban Book (Book), three widely used public benchmarks for recommendation tasks.

Evaluation Metrics. We use the widely adopted AUC (note that a 0.1% absolute AUC gain is regarded as significant for the CTR task), UAUC, HitRate, and NDCG as metrics.

Baselines. To verify applicability, the following representative sequential modeling approaches are implemented and compared with their counterparts combined with the proposed method. DUET and APG are the state of the art in EC-CDR, generating parameters through edge-cloud collaboration for different tasks; with the cloud generator, the on-edge model can generalize well to the current data distribution in each session without training on the edge. GRU4Rec, DIN, and SASRec are three of the most widely used sequential recommendation methods in academia and industry, introducing GRU, attention, and self-attention into the recommendation system, respectively. LOF and OC-SVM estimate the density of a given point via the ratio of the local reachability of its neighbors and itself.
They can be used to detect changes in the distribution of click sequences. For IntellectReq, we use SASRec as the edge model unless otherwise stated, but note that IntellectReq applies broadly to many sequential recommendation models such as DIN, GRU4Rec, etc.

4.2 Experimental Results

4.2.1 Quantitative Results.

Figure 5. Performance w.r.t. request-frequency curve, based on the on-edge dynamic model updated one step earlier (t−1).
Figure 6. Performance w.r.t. request frequency, based on the on-edge dynamic model updated one step earlier (t−1).
Figure 7. Performance w.r.t. request frequency, based on the on-edge static model.

Figures 5, 6, and 7 summarize the quantitative results of our framework and other methods on the CDs and Electronic datasets. The experiments are based on state-of-the-art EC-CDR frameworks such as DUET and APG.
As shown in Figures 5-7, we combine the parameter-generation framework with three sequential recommendation models: DIN, GRU4Rec, and SASRec. We evaluate these methods with the AUC and UAUC metrics on the CDs and Book datasets. We have the following findings: (1) If every edge model was updated at moment t−1, the DUET framework (DUET) and the APG framework (APG) can be viewed as the performance upper bound for all methods, since DUET and APG are evaluated at a fixed 100% request frequency while the other methods are evaluated at increasing frequencies. If every edge model is identical to the cloud-pretrained model, IntellectReq can even beat DUET, which indicates that in EC-CDR not every edge needs to be updated at every moment; in fact, model parameters generated from user data at some moments can be detrimental to performance. Note that directly comparing the other methods with DUET and APG is not fair, as DUET and APG use a fixed 100% request frequency and cannot be deployed at lower request frequencies. (2) The random request methods (DUET (Random), APG (Random)) work under any request budget.
However, random requests do not give the optimal request scheme for a given budget in most cases (e.g., Row 1). The correlation between their performance and the request frequency tends to be linear, and their performance is unstable and unpredictable, outperforming other methods only in a few cases. (3) LOF (DUET (LOF), APG (LOF)) and OC-SVM (DUET (OC-SVM), APG (OC-SVM)) can serve as simple baselines that produce the optimal request scheme under one special, specific request budget. However, they have two weaknesses: they consume considerable resources and thus significantly slow down computation, and they work only under a specific request budget rather than an arbitrary one (in the first row, for instance, the request frequency of OC-SVM can only take a single fixed value). (4) In most cases, our IntellectReq finds the optimal request scheme under any request budget.

4.2.2 Mis-Recommendation Score and Profit.

Figure 8. Mis-Recommendation Score and Revenue.

To further study the effectiveness of MRD, we visualize the request timing and revenue in Figure 8.
As shown in Figure 8, we analyze the relationship between requests and revenue. Users were randomly assigned to 15 groups of 100 users each. The figure is divided into three parts: the first assesses the requests, while the second and third assess the benefit. The metric used here is the Mis-Recommendation Score (MRS), which evaluates the request revenue. MRS measures whether a recommendation will be made in error; in other words, it can be viewed as an evaluation of the model's generalization ability. The lower the score, the higher the probabilities of a mis-recommendation and of requesting model parameters.

• IntellectReq predicts the MRS from the uncertainty and the click sequences at moments t and t−1.
• DUET (Random) randomly selects edges that request the cloud model to update their parameters. In this case the MRS can be regarded as an arbitrary constant; we take the average MRS of IntellectReq as its value.
• DUET (w. Request) means every edge model is updated at moment t.
• DUET (w/o. Request) means no edge model is updated: at moment t−1 in Figures 5 and 6, and at moment 0 in Figure 7.
• Request Revenue denotes the revenue, i.e., the DUET (w. Request) curve minus the DUET (w/o. Request) curve.

From Figure 8, we make the following observations: (1) The trends of MRS and DUET revenue typically move in opposite directions. When the MRS value is low, IntellectReq tends to believe that the edge model cannot generalize well to the current data distribution, so it requests model parameters using the most recent real-time data; the revenue at such times is frequently positive and relatively high. When the MRS value is high, IntellectReq tends to keep the model updated at the previous moment t−1 rather than at t, because it believes the edge model can still generalize well to the current data distribution; if model parameters were requested at such times, the revenue would frequently be low or negative. (2) Since the MRS of DUET (Random) is constant, it cannot predict the revenue of each request.
Its performance curve changes randomly because of the irregular ordering of the groups.

4.2.3 Ablation Study.

Figure 9. Ablation study on model architecture.

We conducted an ablation study to show the effectiveness of the different components of IntellectReq; the results are shown in Figure 9. We use w/o. and w. to denote "without" and "with", respectively:

• IntellectReq means both DM and MRD are used.
• (w/o. DM) means MRD is used but DM is not.
• (w/o. MRD) means DM is used but MRD is not.

From the figure, we have the following observations: (1) IntellectReq generally achieves the best performance across evaluation metrics in most cases, demonstrating its effectiveness. (2) When the request frequency is small, the difference between IntellectReq and IntellectReq (w/o. DM) is not immediately apparent, as shown in Figure 9(d); the difference becomes more noticeable as the request frequency increases within a certain range.
In brief, the difference first shrinks, then grows, and finally shrinks again.

4.2.4 Time and Space Cost.

Most edges have limited storage, so the on-edge model must be small yet sufficient. The edge's computing power is also rather limited, while the recommendation task on the edge requires a great deal of real-time processing, so the model deployed on the edge must be both simple and fast. We therefore analyze whether these methods are controllable and highly profitable under the DUET framework; the additional time and space consumption under this framework is shown in Table 1.

Table 1. Extra time and space cost on the CDs dataset.

Method         Controllable   Profitable   Time Cost         Space Cost (Param.)
LOF            ✗              ✓            225 s / 11.3 ms   ≈0
OC-SVM         ✗              ✓            160 s / 9.7 ms    ≈0
Random         ✓              ✗            0 s / 0.8 ms      ≈0
IntellectReq   ✓              ✓            11 s / 7.9 ms     ≈5.06k

In the time-cost column, the "/" separates the time consumed by cloud preprocessing and by edge inference. Cloud preprocessing means that the cloud server first calculates MRS values from recent user data, determines the threshold from the communication budget of the cloud server, and sends it to the edges.
Edge inference refers to the MRS computed whenever the click sequence on the edge is updated (a minimal sketch of this protocol follows the analysis below). The experimental results show that: 1) In terms of time cost, both cloud preprocessing and edge inference are fastest for random requests, followed by our IntellectReq; LOF and OC-SVM are the slowest. 2) In terms of space cost, Random, LOF, and OC-SVM can all be regarded as requiring no additional space, whereas our method requires deploying an additional 5.06k parameters on the edge. 3) In terms of controllability, only Random and our IntellectReq are controllable, meaning edge-cloud communication can be kept within an arbitrary communication budget; LOF and OC-SVM cannot. 4) In terms of profitability, LOF, OC-SVM, and our IntellectReq are all profitable, but random requests are not.
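To make the controllability result concrete, below is a minimal sketch, in Python, of the budget-to-threshold protocol described above. It assumes MRS is a scalar score where larger values indicate a more valuable parameter request; the quantile rule and all names (choose_threshold, should_request, budget_fraction) are our illustrative assumptions, since the text only states that the cloud derives the threshold from its communication budget.

```python
import numpy as np

def choose_threshold(recent_mrs: np.ndarray, budget_fraction: float) -> float:
    """Cloud preprocessing (hypothetical rule): pick the threshold so that,
    on recent MRS values, roughly `budget_fraction` of sequence updates
    would have triggered a parameter request."""
    # Scores above the (1 - budget) quantile are the top `budget_fraction`
    # most valuable requests; everything below is suppressed.
    return float(np.quantile(recent_mrs, 1.0 - budget_fraction))

def should_request(mrs: float, threshold: float) -> bool:
    """Edge inference: recompute MRS when the click sequence updates and
    request new parameters only when the score clears the threshold."""
    return mrs >= threshold

# Toy usage: the cloud calibrates on recent MRS values and ships the
# threshold to every edge; each edge then decides locally and cheaply.
rng = np.random.default_rng(0)
recent = rng.beta(2.0, 5.0, size=10_000)   # stand-in for recent MRS values
threshold = choose_threshold(recent, budget_fraction=0.2)
print(f"threshold={threshold:.3f}", should_request(0.75, threshold))
```

Under this reading, any budget maps to a threshold (hence the ✓ in the Controllable column), whereas outlier detectors such as LOF and OC-SVM, as used here, expose only a fixed in/out decision that cannot be tuned to an arbitrary budget.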
In general, our IntellectReq requires only minimal time cost (it does not affect real-time performance) and space cost (it is easy to deploy on smart edges), while achieving both controllability and high profitability.

5. Conclusion

In our paper, we argue that under the EC-CDR framework, most communications requesting new parameters for the cloud-based recommendation system are unnecessary due to stable on-edge data distributions. We introduced IntellectReq, a low-resource solution for calculating request value and ensuring adaptive, high-revenue edge-cloud communication. IntellectReq employs a novel edge intelligence task to identify out-of-domain data and uses real-time user behavior mapping to a normal distribution, alongside multi-sampling outputs, to assess the edge model's adaptability to user actions. Our extensive tests across three public benchmarks confirm IntellectReq's efficiency and broad applicability, promoting a more effective edge-cloud collaborative recommendation approach.

ACKNOWLEDGMENT

This work was supported by the National Key R&D Program of China (No. 2022ZD0119100), the Scientific Research Fund of Zhejiang Provincial Education Department (Y202353679), the National Natural Science Foundation of China (No.
62376243, 62037001, U20A20387), the StarryNight Science Fund of Zhejiang University Shanghai Institute for Advanced Study (SN-ZJU-SIAS-0010), the Project by Shanghai AI Laboratory (P22KS00111), and the Program of Zhejiang Province Science and Technology (2022C01044).

References

Markus M Breunig, Hans-Peter Kriegel, Raymond T Ng, and Jörg Sander. 2000. LOF: identifying density-based local outliers. In Proceedings of the 2000 ACM SIGMOD International Conference on Management of Data. 93–104.
Han Cai, Chuang Gan, Ligeng Zhu, and Song Han. 2020. TinyTL: Reduce activations, not trainable parameters for efficient on-device learning. (2020).
Defu Cao, Yixiang Zheng, Parisa Hassanzadeh, Simran Lamba, Xiaomo Liu, and Yan Liu. 2023. Large Scale Financial Time Series Forecasting with Multi-faceted Model. In Proceedings of the Fourth ACM International Conference on AI in Finance (ICAIF '23). Association for Computing Machinery, New York, NY, USA, 472–480. https://doi.org/10.1145/3604237.3626868
Jianxin Chang, Chen Gao, Yu Zheng, Yiqun Hui, Yanan Niu, Yang Song, Depeng Jin, and Yong Li. 2021. Sequential recommendation with graph neural networks. In Proceedings of the 44th International ACM SIGIR Conference on Research and Development in Information Retrieval. 378–387.
Zhengyu Chen and Donglin Wang. 2021. Multi-Initialization Meta-Learning with Domain Adaptation. In ICASSP 2021 - 2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 1390–1394.
Zhengyu Chen, Teng Xiao, and Kun Kuang. 2022. BA-GNN: On Learning Bias-Aware Graph Neural Network. In 2022 IEEE 38th International Conference on Data Engineering (ICDE). IEEE, 3012–3024.
Zhengyu Chen, Teng Xiao, Kun Kuang, Zheqi Lv, Min Zhang, Jinluan Yang, Chengqiang Lu, Hongxia Yang, and Fei Wu. 2023. Learning to Reweight for Graph Neural Network. arXiv preprint arXiv:2312.12475 (2023).
Zhengyu Chen, Teng Xiao, Kun Kuang, Zheqi Lv, Min Zhang, Jinluan Yang, Chengqiang Lu, Hongxia Yang, and Fei Wu. 2024. Learning to Reweight for Generalizable Graph Neural Network. Proceedings of the AAAI Conference on Artificial Intelligence (2024).
Zhengyu Chen, Ziqing Xu, and Donglin Wang. 2021. Deep transfer tensor decomposition with orthogonal constraint for recommender systems. In Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 35. 4010–4018.
David Ha, Andrew Dai, and Quoc V Le. 2017. HyperNetworks. (2017).
Balázs Hidasi, Alexandros Karatzoglou, Linas Baltrunas, and Domonkos Tikk. 2016. Session-based recommendations with recurrent neural networks. International Conference on Learning Representations 2016 (2016).
Rongjie Huang, Jiawei Huang, Dongchao Yang, Yi Ren, Luping Liu, Mingze Li, Zhenhui Ye, Jinglin Liu, Xiang Yin, and Zhou Zhao. 2023. Make-An-Audio: Text-to-audio generation with prompt-enhanced diffusion models. arXiv preprint arXiv:2301.12661 (2023).
Rongjie Huang, Max W. Y. Lam, Jun Wang, Dan Su, Dong Yu, Yi Ren, and Zhou Zhao. 2022. FastDiff: A Fast Conditional Diffusion Model for High-Quality Speech Synthesis. In IJCAI. ijcai.org, 4157–4163.
Rongjie Huang, Yi Ren, Jinglin Liu, Chenye Cui, and Zhou Zhao. 2022. GenerSpeech: Towards style transfer for generalizable out-of-domain text-to-speech. Advances in Neural Information Processing Systems 35 (2022), 10970–10983.
Wei Ji, Renjie Liang, Lizi Liao, Hao Fei, and Fuli Feng. 2023. Partial Annotation-based Video Moment Retrieval via Iterative Learning. In Proceedings of the 31st ACM International Conference on Multimedia.
Wei Ji, Xiangyan Liu, An Zhang, Yinwei Wei, and Xiang Wang. 2023. Online Distillation-enhanced Multi-modal Transformer for Sequential Recommendation. In Proceedings of the 31st ACM International Conference on Multimedia.
Wang-Cheng Kang and Julian McAuley. 2018. Self-attentive sequential recommendation. In 2018 IEEE International Conference on Data Mining (ICDM). IEEE, 197–206.
Sara Latifi, Noemi Mauro, and Dietmar Jannach. 2021. Session-aware recommendation: A surprising quest for the state-of-the-art. Information Sciences 573 (2021), 291–315.
Haoxuan Li, Yanghao Xiao, Chunyuan Zheng, Peng Wu, and Peng Cui. 2023. Propensity matters: Measuring and enhancing balancing for recommendation. In International Conference on Machine Learning. PMLR, 20182–20194.
Haoxuan Li, Yanghao Xiao, Chunyuan Zheng, Peng Wu, Zhi Geng, Xu Chen, and Peng Cui. 2024. Debiased Collaborative Filtering with Kernel-based Causal Balancing. In International Conference on Learning Representations.
Juncheng Li, Xin He, Longhui Wei, Long Qian, Linchao Zhu, Lingxi Xie, Yueting Zhuang, Qi Tian, and Siliang Tang. 2022. Fine-grained semantically aligned vision-language pre-training. Advances in Neural Information Processing Systems 35 (2022), 7290–7303.
Juncheng Li, Kaihang Pan, Zhiqi Ge, Minghe Gao, Hanwang Zhang, Wei Ji, Wenqiao Zhang, Tat-Seng Chua, Siliang Tang, and Yueting Zhuang. 2023. Fine-tuning Multimodal LLMs to Follow Zero-shot Demonstrative Instructions. arXiv preprint arXiv:2308.04152 (2023).
Li Li, Chenwei Wang, You Qin, Wei Ji, and Renjie Liang. 2023. Biased-Predicate Annotation Identification via Unbiased Visual Predicate Representation. In Proceedings of the 31st ACM International Conference on Multimedia (MM '23). Association for Computing Machinery, New York, NY, USA, 4410–4420. https://doi.org/10.1145/3581783.3611847
Mengze Li, Han Wang, Wenqiao Zhang, Jiaxu Miao, Zhou Zhao, Shengyu Zhang, Wei Ji, and Fei Wu. 2023. Winner: Weakly-supervised hierarchical decomposition and alignment for spatio-temporal video grounding. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 23090–23099.
Mengze Li, Tianbao Wang, Jiahe Xu, Kairong Han, Shengyu Zhang, Zhou Zhao, Jiaxu Miao, Wenqiao Zhang, Shiliang Pu, and Fei Wu. 2023. Multi-modal Action Chain Abductive Reasoning. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). 4617–4628.
Mengze Li, Tianbao Wang, Haoyu Zhang, Shengyu Zhang, Zhou Zhao, Jiaxu Miao, Wenqiao Zhang, Wenming Tan, Jin Wang, Peng Wang, et al. 2022. End-to-End Modeling via Information Tree for One-Shot Natural Language Spatial Video Grounding. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). 8707–8717.
Xin-Yu Lin, Yi-Yan Xu, Wen-Jie Wang, Yang Zhang, and Fu-Li Feng. 2023. Mitigating Spurious Correlations for Self-supervised Recommendation. Machine Intelligence Research 20, 2 (2023), 263–275.
Zheqi Lv, Feng Wang, Shengyu Zhang, Kun Kuang, Hongxia Yang, and Fei Wu. 2022. Personalizing Intervened Network for Long-tailed Sequential User Behavior Modeling. arXiv preprint arXiv:2208.09130 (2022).
Zheqi Lv, Feng Wang, Shengyu Zhang, Wenqiao Zhang, Kun Kuang, and Fei Wu. 2023. Parameters Efficient Fine-Tuning for Long-Tailed Sequential Recommendation. In CAAI International Conference on Artificial Intelligence. Springer, 442–459.
Zheqi Lv, Wenqiao Zhang, Shengyu Zhang, Kun Kuang, Feng Wang, Yongwei Wang, Zhengyu Chen, Tao Shen, Hongxia Yang, Beng Chin Ooi, and Fei Wu. 2023. DUET: A Tuning-Free Device-Cloud Collaborative Parameters Generation Framework for Efficient Device Model Generalization. In Proceedings of the ACM Web Conference 2023.
Othmane Marfoq, Giovanni Neglia, Aurélien Bellet, Laetitia Kameni, and Richard Vidal. 2021. Federated multi-task learning under a mixture of distributions. Advances in Neural Information Processing Systems 34 (2021), 15434–15447.
Brendan McMahan, Eider Moore, Daniel Ramage, Seth Hampson, and Blaise Aguera y Arcas. 2017. Communication-efficient learning of deep networks from decentralized data. In Artificial Intelligence and Statistics. PMLR, 1273–1282.
Jed Mills, Jia Hu, and Geyong Min. 2021. Multi-task federated learning for personalised deep neural networks in edge computing. IEEE Transactions on Parallel and Distributed Systems 33, 3 (2021), 630–641.
Xufeng Qian, Yue Xu, Fuyu Lv, Shengyu Zhang, Ziwen Jiang, Qingwen Liu, Xiaoyi Zeng, Tat-Seng Chua, and Fei Wu. 2022. Intelligent Request Strategy Design in Recommender System. In KDD '22: The 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining. ACM, 3772–3782.
Fang-Yu Qin, Zhe-Qi Lv, Dan-Ni Wang, Bo Hu, and Chao Wu. 2020. Health status prediction for the elderly based on machine learning. Archives of Gerontology and Geriatrics 90 (2020), 104121.
Steffen Rendle, Christoph Freudenthaler, and Lars Schmidt-Thieme. 2010. Factorizing personalized Markov chains for next-basket recommendation. The Web Conference (2010).
Victor Sanh, Lysandre Debut, Julien Chaumond, and Thomas Wolf. 2019. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019).
Jiajie Su, Chaochao Chen, Zibin Lin, Xi Li, Weiming Liu, and Xiaolin Zheng. 2023. Personalized Behavior-Aware Transformer for Multi-Behavior Sequential Recommendation. In Proceedings of the 31st ACM International Conference on Multimedia. 6321–6331.
Jiajie Su, Chaochao Chen, Weiming Liu, Fei Wu, Xiaolin Zheng, and Haoming Lyu. 2023. Enhancing Hierarchy-Aware Graph Networks with Deep Dual Clustering for Session-based Recommendation. In Proceedings of the ACM Web Conference 2023. 165–176.
Fei Sun, Jun Liu, Jian Wu, Changhua Pei, Xiao Lin, Wenwu Ou, and Peng Jiang. 2019. BERT4Rec: Sequential recommendation with bidirectional encoder representations from transformer. In Proceedings of the 28th ACM International Conference on Information and Knowledge Management. 1441–1450.
Zihao Tang, Zheqi Lv, Shengyu Zhang, Fei Wu, and Kun Kuang. 2024. ModelGPT: Unleashing LLM's Capabilities for Tailored Model Generation. arXiv preprint arXiv:2402.12408 (2024).
Zihao Tang, Zheqi Lv, Shengyu Zhang, Yifan Zhou, Xinyu Duan, Kun Kuang, and Fei Wu. 2024. AuG-KD: Anchor-Based Mixup Generation for Out-of-Domain Knowledge Distillation. In 12th International Conference on Learning Representations, ICLR 2024, Vienna, Austria, May 7–11, 2024. OpenReview.net. https://openreview.net/forum?id=fcqWJ8JgMR
David Martinus Johannes Tax. 2002. One-class classification: Concept learning in the absence of counter-examples. (2002).
Yunze Tong, Junkun Yuan, Min Zhang, Didi Zhu, Keli Zhang, Fei Wu, and Kun Kuang. 2023. Quantitatively Measuring and Contrastively Exploring Heterogeneity for Domain Generalization. In KDD. ACM, 2189–2200.
Xiao Wang, Peng Cui, Jing Wang, Jian Pei, Wenwu Zhu, and Shiqiang Yang. 2017. Community preserving network embedding. In Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 31.
Shu Wu, Yuyuan Tang, Yanqiao Zhu, Liang Wang, Xing Xie, and Tieniu Tan. 2019. Session-based recommendation with graph neural networks. In Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 33. 346–353.
Yiquan Wu, Weiming Lu, Yating Zhang, Adam Jatowt, Jun Feng, Changlong Sun, Fei Wu, and Kun Kuang. 2023. Focus-aware response generation in inquiry conversation. In Findings of the Association for Computational Linguistics: ACL 2023. 12585–12599.
Yiquan Wu, Siying Zhou, Yifei Liu, Weiming Lu, Xiaozhong Liu, Yating Zhang, Changlong Sun, Fei Wu, and Kun Kuang. 2023. Precedent-Enhanced Legal Judgment Prediction with LLM and Domain-Model Collaboration. arXiv preprint arXiv:2310.09241 (2023).
Xinyu Lin, Wenjie Wang, Jujia Zhao, Yongqi Li, Fuli Feng, and Tat-Seng Chua. 2024. Temporally and Distributionally Robust Optimization for Cold-start Recommendation. In AAAI.
Bencheng Yan, Pengjie Wang, Kai Zhang, Feng Li, Jian Xu, and Bo Zheng. 2022. APG: Adaptive Parameter Generation Network for Click-Through Rate Prediction. In Advances in Neural Information Processing Systems.
Yikai Yan, Chaoyue Niu, Renjie Gu, Fan Wu, Shaojie Tang, Lifeng Hua, Chengfei Lyu, and Guihai Chen. 2022. On-Device Learning for Model Personalization with Large-Scale Cloud-Coordinated Domain Adaption. In KDD '22: The 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, Washington, DC, USA, August 14–18, 2022. 2180–2190.
Jiangchao Yao, Feng Wang, Xichen Ding, Shaohu Chen, Bo Han, Jingren Zhou, and Hongxia Yang. 2022. Device-cloud Collaborative Recommendation via Meta Controller. In KDD '22: The 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, Washington, DC, USA, August 14–18, 2022. 4353–4362.
Jiangchao Yao, Shengyu Zhang, Yang Yao, Feng Wang, Jianxin Ma, Jianwei Zhang, Yunfei Chu, Luo Ji, Kunyang Jia, Tao Shen, et al. 2022. Edge-Cloud Polarization and Collaboration: A Comprehensive Survey for AI. IEEE Transactions on Knowledge and Data Engineering (2022).
Fengda Zhang, Kun Kuang, Long Chen, Yuxuan Liu, Chao Wu, and Jun Xiao. 2022. Fairness-aware contrastive learning with partially annotated sensitive attributes. In The Eleventh International Conference on Learning Representations.
Fengda Zhang, Kun Kuang, Long Chen, Zhaoyang You, Tao Shen, Jun Xiao, Yin Zhang, Chao Wu, Fei Wu, Yueting Zhuang, et al. 2023. Federated unsupervised representation learning. Frontiers of Information Technology & Electronic Engineering 24, 8 (2023), 1181–1193.
Shengyu Zhang, Fuli Feng, Kun Kuang, Wenqiao Zhang, Zhou Zhao, Hongxia Yang, Tat-Seng Chua, and Fei Wu. 2023. Personalized Latent Structure Learning for Recommendation. IEEE Transactions on Pattern Analysis and Machine Intelligence (2023).
Shengyu Zhang, Tan Jiang, Tan Wang, Kun Kuang, Zhou Zhao, Jianke Zhu, Jin Yu, Hongxia Yang, and Fei Wu. 2020. DeVLBert: Learning Deconfounded Visio-Linguistic Representations. In MM '20: The 28th ACM International Conference on Multimedia. ACM, 4373–4382.
Wenqiao Zhang, Changshuo Liu, Lingze Zeng, Bengchin Ooi, Siliang Tang, and Yueting Zhuang. 2023. Learning in Imperfect Environment: Multi-Label Classification with Long-Tailed Distribution and Partial Labels. In Proceedings of the IEEE/CVF International Conference on Computer Vision. 1423–1432.
Wenqiao Zhang and Zheqi Lv. 2024. Revisiting the Domain Shift and Sample Uncertainty in Multi-source Active Domain Transfer. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition.
Wenqiao Zhang, Haochen Shi, Jiannan Guo, Shengyu Zhang, Qingpeng Cai, Juncheng Li, Sihui Luo, and Yueting Zhuang. 2021. MAGIC: Multimodal relAtional Graph adversarIal inferenCe for Diverse and Unpaired Text-based Image Captioning. arXiv preprint arXiv:2112.06558 (2021).
Wenqiao Zhang, Lei Zhu, James Hallinan, Shengyu Zhang, Andrew Makmur, Qingpeng Cai, and Beng Chin Ooi. 2022. BoostMIS: Boosting medical image semi-supervised learning with adaptive pseudo labeling and informative active annotation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 20666–20676.
Yifei Zhang, Hao Zhu, Zixing Song, Piotr Koniusz, Irwin King, et al. 2024. Mitigating the Popularity Bias of Graph Collaborative Filtering: A Dimensional Collapse Perspective. Advances in Neural Information Processing Systems 36 (2024).
Guorui Zhou, Xiaoqiang Zhu, Chenru Song, Ying Fan, Han Zhu, Xiao Ma, Yanghui Yan, Junqi Jin, Han Li, and Kun Gai. 2018. Deep interest network for click-through rate prediction. In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining. 1059–1068.
Didi Zhu, Yinchuan Li, Yunfeng Shao, Jianye Hao, Fei Wu, Kun Kuang, Jun Xiao, and Chao Wu. 2023. Generalized Universal Domain Adaptation with Generative Flow Networks. In ACM Multimedia. ACM, 8304–8315.
Didi Zhu, Yinchuan Li, Junkun Yuan, Zexi Li, Kun Kuang, and Chao Wu. 2023. Universal domain adaptation via compressive attention matching. In Proceedings of the IEEE/CVF International Conference on Computer Vision. 6974–6985.
Appendix A

This is the Appendix for "Intelligent Model Update Strategy for Sequential Recommendation".

A.1 Supplementary Method

A.1.1 Notations and Definitions

We summarize the notations and definitions in Table 2.

Table 2. Notations and Definitions.

Notation                                                   Definition
$u$, $v$, $s$                                              User, Item, Behavior sequence
$d$                                                        Edge
$D=\{d^{(i)}\}_{i=1}^{N_d}$                                Set of edges
$S_H^{(i)}$, $S_R^{(i)}$, $S_{MRD}$                        History samples, Real-time samples, MRD samples
$N_d$, $N_H^{(i)}$, $N_R^{(i)}$                            Number of edges, number of history samples, number of real-time samples
$\Theta_g$, $\Theta_d$, $\Theta_{MRD}$                     Parameters of the global cloud model, the local edge model, and the MRD model
$M_g(\cdot;\Theta_g)$, $M_d^{(i)}(\cdot;\Theta_d^{(i)})$, $M_c^{(i)}(S_{MRD};\Theta_{MRD})$   Global cloud model, Local edge recommendation model, Local edge control model
$L_{rec}$, $L_{MRD}$                                       Loss function of recommendation, Loss function of mis-recommendation
$\Omega$                                                   Feature extractor
A.1.2 Optimization Target

To describe it in the simplest way, we assume the set of edges is $D=\{d^{(i)}\}_{i=1}^{N_d}$, the set of edges updated by the baseline method is $D'_u=\{d^{(i)}\}_{i=1}^{N'_u}$, and the set updated by our method is $D_u=\{d^{(i)}\}_{i=1}^{N_u}$. $N_d$, $N'_u$, and $N_u$ are the sizes of $D$, $D'_u$, and $D_u$, respectively, and the communication upper bound is set to $N_{thres}$. Suppose the ground-truth values $y$, the predictions $\hat{y}'$ of the baseline models, and the predictions $\hat{y}$ of our model are row vectors. Our optimization target is then to obtain the highest model performance while limiting the upper bound of the communication frequency:

(21)
$$\begin{aligned} \text{Maximize}\quad & \hat{y}\,y^{T} \\ \text{Subject to}\quad & 0 \le N_u \le N_{thres}, \\ & N_u \le N'_u, \\ & D_u \subset D. \end{aligned}$$

In this case, the improvement of our method is $\Delta=\hat{y}y^{T}-\hat{y}'y^{T}$. Equivalently, the objective can be regarded as reducing the communication frequency without degrading performance:

(22)
$$\begin{aligned} \text{Minimize}\quad & N_u \\ \text{Subject to}\quad & 0 \le N_u \le N_{thres}, \\ & \hat{y}y^{T} \ge \hat{y}'y^{T}, \\ & D_u \subset D. \end{aligned}$$

In this case, the improvement of our method is $\Delta=N_d-N_u$.
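The budgeted objective in Eq. (22) admits a simple greedy realization: rank edges by their predicted mis-recommendation risk and serve parameter requests only for the riskiest edges until the budget $N_{thres}$ is exhausted. The sketch below is a minimal illustration under assumed inputs (per-edge risk scores produced by the control model); the function and variable names are hypothetical, not part of the released system.

```python
import numpy as np

def select_edges_to_update(mrs_scores: np.ndarray, n_thres: int) -> np.ndarray:
    """Greedy budgeted selection for Eq. (22), a minimal sketch.

    mrs_scores: per-edge Mis-Recommendation Scores in [0, 1]; higher means
                the cached edge parameters are more likely stale.
    n_thres:    communication upper bound N_thres (max number of updates).

    Returns indices of the edges whose update requests are served, i.e.
    the set D_u with |D_u| <= N_thres.
    """
    n_thres = min(n_thres, len(mrs_scores))
    ranked = np.argsort(-mrs_scores)  # serve the riskiest edges first
    return ranked[:n_thres]

# Toy usage: 6 edges, budget of 2 -> only the two riskiest edges update.
scores = np.array([0.05, 0.91, 0.40, 0.88, 0.10, 0.30])
print(select_edges_to_update(scores, n_thres=2))  # -> [1 3]
```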
A.2 Supplementary Experimental Results

A.2.1 Datasets

We evaluate IntellectReq and the baselines on Amazon CDs (CDs) [https://jmcauley.ucsd.edu/data/amazon/], Amazon Electronic (Electronic), and Douban Book (Book) [https://www.kaggle.com/datasets/fengzhujoey/douban-datasetratingreviewside-information], three widely used public benchmarks for recommendation tasks; Table 3 shows their statistics. Following conventional practice, all user-item pairs in the dataset are treated as positive samples. To conduct sequential recommendation experiments, we arrange the items clicked by each user into a sequence ordered by timestamp. Following prior work, negative sampling is performed at ratios of 1:4 and 1:99 in the training and testing sets, respectively; all user-item pairs that do not appear in the dataset are treated as negative samples.

Table 3. Statistics of Datasets.

              Amazon CDs   Amazon Electronic   Douban Books
#User          1,578,597           4,201,696         46,549
#Item            486,360             476,002        212,996
#Interaction   3,749,004           7,824,482      1,861,533
#Density       0.0000049           0.0000039      0.0002746

A.2.2 Evaluation Metrics

In the experiments, we use the widely adopted AUC, UAUC, NDCG, and HitRate as metrics to evaluate model performance. They are defined by the following equations:

(23)
$$\mathrm{AUC}=\frac{\sum_{x_0\in D^T}\sum_{x_1\in D^F}\mathbb{1}[f(x_1)<f(x_0)]}{|D^T|\,|D^F|},$$

(24)
$$\mathrm{UAUC}=\frac{1}{|U|}\sum_{u\in U}\frac{\sum_{x_0\in D_u^T}\sum_{x_1\in D_u^F}\mathbb{1}[f(x_1)<f(x_0)]}{|D_u^T|\,|D_u^F|},$$

(25)
$$\mathrm{NDCG@}K=\frac{1}{|U|}\sum_{u\in U}\frac{2^{\mathbb{1}(R_{u,g_u}\le K)}-1}{\log_2\!\big(R_{u,g_u}+1\big)},$$

(26)
$$\mathrm{HitRate@}K=\frac{1}{|U|}\sum_{u\in U}\mathbb{1}(R_{u,g_u}\le K).$$

In the equations above, $\mathbb{1}(\cdot)$ is the indicator function, $f$ is the model to be evaluated, and $R_{u,g_u}$ is the rank predicted by the model for the ground-truth item $g_u$ of user $u$. $D^T$ and $D^F$ are the positive and negative testing sample sets, respectively, and $D_u^T$ and $D_u^F$ are the positive and negative testing sample sets of user $u$, respectively.
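For concreteness, the sketch below shows how these metrics can be computed from scores and ranks; it is a minimal reference implementation under assumed inputs (per-user ranks of the ground-truth item, raw scores for positive and negative samples), not the evaluation code used in our experiments.

```python
import numpy as np

def auc(pos_scores: np.ndarray, neg_scores: np.ndarray) -> float:
    """Eq. (23): fraction of (positive, negative) pairs ranked correctly."""
    # Broadcasting compares every positive score against every negative one.
    correct = (pos_scores[:, None] > neg_scores[None, :]).sum()
    return correct / (len(pos_scores) * len(neg_scores))

def hitrate_at_k(ranks: np.ndarray, k: int) -> float:
    """Eq. (26): ranks[u] is the rank R_{u,g_u} of user u's ground-truth item."""
    return float(np.mean(ranks <= k))

def ndcg_at_k(ranks: np.ndarray, k: int) -> float:
    """Eq. (25): with one ground-truth item per user, the gain of a hit
    is 1/log2(rank + 1) and a miss contributes 0."""
    hits = ranks <= k
    return float(np.mean(np.where(hits, 1.0 / np.log2(ranks + 1.0), 0.0)))

# Toy usage: 4 users whose ground-truth items are ranked 1, 3, 12, 2.
ranks = np.array([1, 3, 12, 2])
print(hitrate_at_k(ranks, 10))  # 0.75
print(ndcg_at_k(ranks, 10))     # ~0.53
print(auc(np.array([0.9, 0.7]), np.array([0.2, 0.8])))  # 0.75
```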
A.2.3 Request Frequency and Threshold

Figure 10 shows the relationship between the request frequency and different thresholds.

Figure 10. Request frequency w.r.t. different thresholds.

A.3 Training Procedure and Inference Procedure

In this section, we describe the overall pipeline in detail in conjunction with Figure 11.

Figure 11. The overall pipeline of our proposed IntellectReq.

1. Training Procedure

① We first pre-train an EC-CDR framework; EC-CDR can generate model parameters from data.

② MRD training procedure. 1) Construct the MRD dataset. Assume the current moment is $t=T$. Under the EC-CDR framework, we take the model parameters generated from the data at moment $t=0$ and apply the resulting model to the data at the current moment $t=T$. This yields a prediction $\hat{y}$; comparing $\hat{y}$ with the ground truth $y$ tells us whether the model mis-recommends. We then repeat this with the data used for parameter generation ranging from $t=0$ to $t=T-1$, which constructs the MRD dataset. It contains three columns: the data used for parameter generation ($x_1$), the current data ($x_2$), and the mis-recommendation label ($y_{MRD}$). A sketch of this construction follows below.
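The construction loop can be summarized in a few lines. The sketch below is a schematic illustration with hypothetical helper names (generate_params stands in for the cloud hypernetwork, edge_model for the edge recommender), not our released code.

```python
def build_mrd_dataset(sequences, labels, T, generate_params, edge_model):
    """Minimal sketch of MRD-dataset construction.

    sequences[t], labels[t]: click sequence / ground-truth label at moment t.
    Returns rows of (x1, x2, y_MRD): generation data, current data, and
    whether the stale parameters mis-recommend on the current data.
    """
    dataset = []
    for t in range(T):                                # t = 0 .. T-1
        params = generate_params(sequences[t])        # stale parameters
        y_hat = edge_model(sequences[T], params)      # applied to current data
        y_mrd = int(round(y_hat) != labels[T])        # 1 = mis-recommendation
        dataset.append((sequences[t], sequences[T], y_mrd))
    return dataset

# Toy usage with stand-in callables:
seqs = {0: [1, 2], 1: [1, 2, 5], 2: [1, 2, 5, 9]}
labs = {0: 1, 1: 0, 2: 1}
gen = lambda seq: sum(seq)                  # fake hypernetwork
mdl = lambda seq, p: (len(seq) + p) % 2     # fake edge model
print(build_mrd_dataset(seqs, labs, T=2, generate_params=gen, edge_model=mdl))
```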
2) Train MRD. MRD is a fully connected neural network that takes $x_1$ and $x_2$ as input and fits the mis-recommendation label $y_{MRD}$. The trained MRD can then be used to determine whether the model parameters generated from the data at some earlier moment are still valid for the current data; its prediction can be regarded as a Mis-Recommendation Score (MRS).

③ DM training procedure. We map the data into a Gaussian distribution with the Conditional-VAE method, then sample feature vectors from this distribution to complete the next-item prediction task, i.e., predicting the item the user will click next. The resulting DM can produce multiple next-item predictions by sampling from the distribution multiple times, and these are used to calculate the Uncertainty, as illustrated below.
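As a rough illustration of this multi-sample uncertainty idea, the sketch below maps a behavior representation to a Gaussian, draws several samples, and measures the disagreement of the resulting next-item predictions. The architecture and names (layer sizes, n_samples, TinyDM) are assumptions for illustration, not the exact DM used in the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyDM(nn.Module):
    """Toy distribution-mapping head: sequence embedding -> N(mu, sigma^2)."""
    def __init__(self, dim=64, n_items=1000):
        super().__init__()
        self.mu = nn.Linear(dim, dim)
        self.logvar = nn.Linear(dim, dim)
        self.next_item = nn.Linear(dim, n_items)  # next-item classifier

    def uncertainty(self, h, n_samples=10):
        mu, logvar = self.mu(h), self.logvar(h)
        preds = []
        for _ in range(n_samples):
            # Reparameterized sample from the learned Gaussian.
            z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
            preds.append(F.softmax(self.next_item(z), dim=-1))
        preds = torch.stack(preds)            # (n_samples, batch, n_items)
        # Disagreement across samples: variance of the predicted distributions.
        return preds.var(dim=0).sum(dim=-1)   # (batch,)

h = torch.randn(4, 64)          # batch of 4 sequence embeddings
print(TinyDM().uncertainty(h))  # higher value = less stable prediction
```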
④ Joint training procedure of MRD and DM. We use a fully connected neural network, denoted as $f(\cdot)$, which takes the MRS and the Uncertainty as input and fits $y_{MRD}$ in the MRD dataset, i.e., the mis-recommendation label.

2. Inference Procedure

The MRS is calculated on the cloud using all recent user data, and the MRS threshold is determined according to the load; the threshold is then sent to each edge. An edge last updated its model at some earlier moment $t=n$ ($n<T$) and must now decide whether another update is necessary at moment $t=T$, i.e., whether the current model has become invalid for the current data distribution. To decide, we only need to feed the MRS and the Uncertainty computed from the data at moments $t=n$ and $t=T$ into $f(\cdot)$. What we output is in fact an invalidity degree, a continuous value between 0 and 1, and whether to update the edge model depends on the load-based threshold calculated on the cloud, as sketched below.
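Putting the pieces together, the edge-side request decision reduces to a threshold test. The sketch below is a minimal, assumption-laden illustration: the callables mrd, dm_uncertainty, and fusion stand in for the trained MRD, DM, and $f(\cdot)$.

```python
def should_request_update(x_n, x_T, mrd, dm_uncertainty, fusion, threshold):
    """Edge-side control logic, a minimal sketch.

    x_n: data used for the last parameter update (moment t = n)
    x_T: current real-time data (moment t = T)
    threshold: load-dependent value broadcast by the cloud
    Returns True if a parameter request should be sent to the cloud.
    """
    mrs = mrd(x_n, x_T)              # Mis-Recommendation Score
    unc = dm_uncertainty(x_T)        # multi-sample uncertainty
    invalidity = fusion(mrs, unc)    # f(.) output in [0, 1]
    return invalidity > threshold

# Toy usage with stand-in callables:
print(should_request_update(
    x_n=[1, 2], x_T=[1, 2, 9],
    mrd=lambda a, b: 0.8, dm_uncertainty=lambda x: 0.6,
    fusion=lambda m, u: 0.5 * (m + u), threshold=0.65))  # True
```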
A.4 Hyperparameters and Training Schedules

We summarize the hyperparameters and training schedules of IntellectReq in Table 4; the same settings are used for Amazon CDs, Amazon Electronic, and Douban Book.

Table 4. Hyperparameters and training schedules.

Parameter          Setting
GPU                Tesla A100
Optimizer          Adam
Learning rate      0.001
Batch size         1024
Sequence length    30
Dimension of z     1×64
N                  32
n                  10

A.4.1 Impact on the Real World

The following case is based on a dynamic model from the previous moment; if it were based on an on-edge static model, the improvement would be much more significant. We collected some intuitive data and examples to show the challenge and IntellectReq's impact on the real world (Table 5).

Table 5. IntellectReq's Impact on the Real World.

               Google                 Alibaba
               Bytes      FLOPs       Bytes      FLOPs
EC-CDR         4.69GB     152.46G     53.19GB    1.68T
IntellectReq   3.79GB     123.49G     43.08GB    1.36T
Δ (average)    19.2%

(1) We calculate the number of bytes and FLOPs required for one parameter update: 48.5kB and 1.53M FLOPs. That is, updating a model on the edge requires transmitting 48.5kB of data through edge-cloud communication and consumes 1.53M FLOPs of computing power of the cloud model.
(2) According to reports, Google processes 99,000 clicks per second, so it would need to transmit 48.5kB × 99k ≈ 4.69GB per second and consume 1.53M × 99k ≈ 152.46G FLOPs per second on its cloud servers. Alibaba processes 1,150,000 clicks per second, so it would need to transmit 48.5kB × 1150k ≈ 53.19GB per second and consume 1.53M × 1150k ≈ 1.68T FLOPs per second on its cloud servers. These are not even the peak values.
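These platform-level figures follow directly from the per-update cost; the short back-of-the-envelope computation below reproduces them (the small gaps relative to Table 5 come from rounding of the reported per-update cost).

```python
# Back-of-the-envelope check of the platform-scale estimates above.
BYTES_PER_UPDATE = 48.5e3   # 48.5 kB transmitted per parameter update
FLOPS_PER_UPDATE = 1.53e6   # 1.53M FLOPs consumed per parameter update

for platform, clicks_per_sec in [("Google", 99_000), ("Alibaba", 1_150_000)]:
    gb_per_sec = BYTES_PER_UPDATE * clicks_per_sec / 1e9
    gflops_per_sec = FLOPS_PER_UPDATE * clicks_per_sec / 1e9
    print(f"{platform}: {gb_per_sec:.1f} GB/s, {gflops_per_sec:.1f} GFLOPs/s")
# Google:  4.8 GB/s,  151.5 GFLOPs/s   (reported: 4.69GB, 152.46G)
# Alibaba: 55.8 GB/s, 1759.5 GFLOPs/s  (reported: 53.19GB, 1.68T)
```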
Obviously, such a huge bandwidth load and computing-power consumption make it hard to update the model for all edges at every moment, especially at peak times. (3) The distributed nature of today's clouds may be able to afford this computational volume, since enough servers can be called upon to support edge-cloud collaboration; the huge resource consumption, however, is impractical in real scenarios. According to our empirical study, IntellectReq brings a 21.4% resource saving at equal performance under the APG framework and a 16.6% resource saving at equal performance under the DUET framework. Summing up, IntellectReq saves about 19% of resources on average, which is very helpful for cost control and can facilitate the development of EC-CDR in practice; Table 5 above compares IntellectReq and EC-CDR in terms of the amount of transmitted data and the computing power consumed on the cloud. (4) During peak periods, resources become tight and cause freezes or even crashes, and this is already the case before EC-CDR is deployed, i.e., when edge-cloud communication performs only the most basic transmission of user data. IntellectReq can therefore achieve better performance than EC-CDR under any resource limit $\epsilon$, or achieve the performance for which EC-CDR would require $\epsilon+19\%$ of the resources.