{"id":5064,"date":"2021-09-24T15:41:12","date_gmt":"2021-09-24T15:41:12","guid":{"rendered":"https:\/\/www.dinu.at\/profile\/home\/?p=5064"},"modified":"2022-08-13T10:58:05","modified_gmt":"2022-08-13T10:58:05","slug":"the-balancing-principle-for-parameter-choice-in-distance-regularized-domain-adaptation","status":"publish","type":"post","link":"https:\/\/www.dinu.at\/profile\/home\/the-balancing-principle-for-parameter-choice-in-distance-regularized-domain-adaptation\/","title":{"rendered":"The balancing principle for parameter choice in distance-regularized domain adaptation"},"content":{"rendered":"\n<h2>Abstract<\/h2>\n\n\n\n<p>We address the unsolved algorithm design problem of choosing a justified regularization parameter in unsupervised domain adaptation. This problem is intriguing as no labels are available in the target domain. Our approach starts with the observation that the widely-used method of minimizing the source error, penalized by a distance measure between source and target feature representations, shares characteristics with regularized ill-posed inverse problems. Regularization parameters in inverse problems are optimally chosen by the fundamental principle of balancing approximation and sampling errors. We use this principle to balance learning errors and domain distance in a target error bound. As a result, we obtain a theoretically justified rule for the choice of the regularization parameter. In contrast to the state of the art, our approach allows source and target distributions with disjoint supports. 
An empirical comparative study on benchmark datasets underpins the performance of our approach.<\/p>\n\n\n<div id=\"themify_builder_content-5064\" data-postid=\"5064\" class=\"themify_builder_content themify_builder_content-5064 themify_builder\">\n\n    <\/div>\n","protected":false},"excerpt":{"rendered":"<p>Abstract We address the unsolved algorithm design problem of choosing a justified regularization parameter in unsupervised domain adaptation. This problem is intriguing as no labels are available in the target domain. Our approach starts with the observation that the widely-used method of minimizing the source error, penalized by a distance measure between source and target [&hellip;]<\/p>\n","protected":false},"author":2,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_exactmetrics_skip_tracking":false,"_exactmetrics_sitenote_active":false,"_exactmetrics_sitenote_note":"","_exactmetrics_sitenote_category":0,"spay_email":"","jetpack_publicize_message":"","jetpack_is_tweetstorm":false,"jetpack_publicize_feature_enabled":true},"categories":[1],"tags":[],"jetpack_featured_media_url":"","jetpack_publicize_connections":[],"jetpack_sharing_enabled":true,"jetpack_shortlink":"https:\/\/wp.me\/p7SrVj-1jG","jetpack-related-posts":[{"id":5080,"url":"https:\/\/www.dinu.at\/profile\/home\/ensemble-learning-for-domain-adaptation-by-importance-weighted-least-squares\/","url_meta":{"origin":5064,"position":0},"title":"Addressing Parameter Choice Issues in Unsupervised Domain Adaptation by Aggregation","date":"13. August 2022","format":false,"excerpt":"Abstract We study the problem of choosing algorithm hyper-parameters in unsupervised domain adaptation, i.e., with labeled data in a source domain and unlabeled data in a target domain, drawn from a different input distribution. 
We follow the strategy to compute several models using different hyper-parameters, and, to subsequently compute a\u2026","rel":"","context":"In &quot;General&quot;","img":{"alt_text":"","src":"","width":0,"height":0},"classes":[]},{"id":5072,"url":"https:\/\/www.dinu.at\/profile\/home\/reactive-exploration-to-cope-with-non-stationarity-in-lifelong-reinforcement-learning\/","url_meta":{"origin":5064,"position":1},"title":"Reactive Exploration to Cope with Non-Stationarity in Lifelong Reinforcement Learning","date":"1. August 2022","format":false,"excerpt":"Abstract In lifelong learning, an agent learns throughout its entire life without resets, in a constantly changing environment, as we humans do. Consequently, lifelong learning comes with a plethora of research problems such as continual domain shifts, which result in non-stationary rewards and environment dynamics. These non-stationarities are difficult to\u2026","rel":"","context":"In &quot;General&quot;","img":{"alt_text":"","src":"","width":0,"height":0},"classes":[]},{"id":5142,"url":"https:\/\/www.dinu.at\/profile\/home\/a-neuro-symbolic-perspective-on-large-language-models-llms\/","url_meta":{"origin":5064,"position":2},"title":"A Neuro-Symbolic Perspective on Large Language Models (LLMs)","date":"22. January 2023","format":false,"excerpt":"We are excited to present our work, combining the power of a symbolic approach and Large Language Models (LLMs). Our Symbolic API bridges the gap between classical programming (Software 1.0) and differentiable programming (Software 2.0). 
Conceptually, our framework uses neural networks - specifically LLMs - at its core, and composes\u2026","rel":"","context":"In &quot;General&quot;","img":{"alt_text":"","src":"https:\/\/i0.wp.com\/www.dinu.at\/wp-content\/uploads\/2023\/01\/symai_logo.png?resize=350%2C200&ssl=1","width":350,"height":200},"classes":[]},{"id":4672,"url":"https:\/\/www.dinu.at\/profile\/home\/overcoming-catastrophic-forgetting-with-context-dependent-activations-xda-and-synaptic-stabilization\/","url_meta":{"origin":5064,"position":3},"title":"Overcoming Catastrophic Forgetting with Context-Dependent Activations (XdA) and Synaptic Stabilization","date":"25. November 2019","format":false,"excerpt":"Abstract Overcoming Catastrophic Forgetting in neural networks is crucial to solving continuous learning problems. Deep Reinforcement Learning uses neural networks to make predictions of actions according to the current state space of an environment. In a dynamic environment, robust and adaptive life-long learning algorithms mark the cornerstone of their success.\u2026","rel":"","context":"In 
&quot;General&quot;","img":{"alt_text":"","src":"","width":0,"height":0},"classes":[]}],"builder_content":"","_links":{"self":[{"href":"https:\/\/www.dinu.at\/profile\/home\/wp-json\/wp\/v2\/posts\/5064"}],"collection":[{"href":"https:\/\/www.dinu.at\/profile\/home\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.dinu.at\/profile\/home\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.dinu.at\/profile\/home\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.dinu.at\/profile\/home\/wp-json\/wp\/v2\/comments?post=5064"}],"version-history":[{"count":19,"href":"https:\/\/www.dinu.at\/profile\/home\/wp-json\/wp\/v2\/posts\/5064\/revisions"}],"predecessor-version":[{"id":5116,"href":"https:\/\/www.dinu.at\/profile\/home\/wp-json\/wp\/v2\/posts\/5064\/revisions\/5116"}],"wp:attachment":[{"href":"https:\/\/www.dinu.at\/profile\/home\/wp-json\/wp\/v2\/media?parent=5064"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.dinu.at\/profile\/home\/wp-json\/wp\/v2\/categories?post=5064"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.dinu.at\/profile\/home\/wp-json\/wp\/v2\/tags?post=5064"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}
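<p>To give a rough feel for the idea (this is an illustrative sketch, not the paper's actual algorithm or bound), consider a linear model trained on labeled source data with a penalty on a source–target feature discrepancy, where the regularization parameter is picked by balancing the two terms. The mean-feature discrepancy penalty, the toy data, and the "closest in magnitude" balancing rule below are all simplifying assumptions:</p>

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy unsupervised-DA setup: labeled source, unlabeled (shifted) target.
n, d = 200, 5
Xs = rng.normal(0.0, 1.0, (n, d))
ys = Xs @ np.array([1.0, -2.0, 0.5, 0.0, 1.5]) + 0.1 * rng.normal(size=n)
Xt = rng.normal(0.5, 1.0, (n, d))          # target inputs, no labels

# A crude stand-in for a domain distance: mean-feature discrepancy.
delta = Xs.mean(axis=0) - Xt.mean(axis=0)

def fit(lam):
    """Closed-form minimizer of source MSE + lam * (mean-output gap)^2."""
    A = Xs.T @ Xs / n + lam * np.outer(delta, delta)
    w = np.linalg.solve(A, Xs.T @ ys / n)
    err = np.mean((Xs @ w - ys) ** 2)      # learning (source) error, grows with lam
    dist = float(w @ delta) ** 2           # domain-distance term, shrinks with lam
    return w, err, dist

lams = np.logspace(-3, 3, 25)
results = [fit(l) for l in lams]

# Balancing heuristic: choose the lambda where the two error terms
# are closest in magnitude (the curves cross).
best = min(range(len(lams)), key=lambda i: abs(results[i][1] - results[i][2]))
lam_star = lams[best]
```

<p>The key structural feature mirrored here is the trade-off the abstract describes: the source error is nondecreasing in the regularization strength while the domain-distance term is nonincreasing, so a balanced crossing point exists and can be found on a grid without any target labels.</p>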