The current electricity grid is undergoing major changes. There is increasing pressure to move away from power generation from fossil fuels, both due to ecological concerns and fear of dependence on scarce natural resources. Increasing the share of decentralized generation from renewable sources is a widely accepted path to a more sustainable power infrastructure. However, this comes at the price of new challenges: generation from solar or wind power is not controllable and can only be forecast with limited accuracy. Exerting control on the demand side is a promising approach to compensating for the increasing volatility in power generation: by providing flexibility on the demand side, imbalances between generation and demand can be mitigated.
This work develops methods to provide grid support on the demand side while limiting the associated costs. This is done in four major steps: first, the target power curve to follow is derived, taking into account both the goals of a grid authority and the costs of the respective load. The focus then narrows to data centers as an instance of significant loads inside a power grid. Data center services are adapted so as to follow the previously derived power curve. By means of hardware power demand models, the required adaptation of hardware utilization can be derived. The possibilities of adapting software services are investigated for the special use case of live video encoding, and a method to minimize the loss in quality of experience while reducing power demand is presented. Finally, the applicability of probabilistic model checking to a continuous demand-response scenario is demonstrated.
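As a minimal sketch of the power-demand-model idea, the following snippet assumes a simple linear utilization-to-power model; the function names and wattage values are illustrative, not taken from the thesis:

    def server_power(utilization, p_idle=100.0, p_peak=250.0):
        """Estimate server power demand (watts) from CPU utilization in [0, 1].

        Linear interpolation between idle and peak power is a common
        first-order model for server power demand.
        """
        if not 0.0 <= utilization <= 1.0:
            raise ValueError("utilization must be in [0, 1]")
        return p_idle + (p_peak - p_idle) * utilization

    def utilization_for_target_power(p_target, p_idle=100.0, p_peak=250.0):
        """Invert the model: which utilization cap yields a target power demand?"""
        return min(1.0, max(0.0, (p_target - p_idle) / (p_peak - p_idle)))

    # Example: to hold this hypothetical server at 190 W, cap utilization at 60 %.
    print(utilization_for_target_power(190.0))  # -> 0.6

Inverting such a model is what lets a demand-side controller translate a target power curve into concrete utilization caps for the hardware.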
Fundamental changes in business-to-business (B2B) buying behavior confront B2B supplier firms with unprecedented challenges. On the one hand, a rising share of industrial buyers demands digitalized offerings and processes from suppliers. Consequently, suppliers are urged to implement digital transformations by expanding the range of both digital offerings and processes. On the other hand, B2B buyers increasingly expect suppliers to provide solutions individually tailored to their idiosyncratic needs. Hence, suppliers are also required to implement non-digital transformations by providing offerings and processes that are customized to each customer's specific requirements.
The rise of these digital and non-digital transformations calls established knowledge into question. B2B marketing research and practice are thus urged to create a comprehensive understanding of digital and non-digital transformations by means of novel, empirically grounded insights and to derive actionable response strategies. In response, my dissertation addresses the overall research question of how B2B supplier firms can successfully implement both digital and non-digital transformations in three individual essays.
In Essay 1, I offer a broader perspective on both digital and non-digital transformations by investigating digital service customization (i.e., the tailoring of digital B2B services to customers’ individual needs). Through a systematic literature review and bibliometric analysis, I outline a comprehensive set of factors that favor the application of distinct digital service customization strategies. Essay 2 represents a deep dive into digital transformations of sales processes. By making use of two rich sets of qualitative interview material from supplier and buyer firms, I identify the challenges resulting for B2B salespeople from the introduction of digital sales channels into personal selling. Moreover, I uncover facilitating mechanisms that sales managers can employ to support salespeople in coping with digital sales channels. Finally, Essay 3 constitutes a deep dive into non-digital transformations. Based on qualitative interview material and survey data from matched sales manager–salesperson dyads, the essay explores how configurations of individual salespeople’s personal and procedural competencies facilitate success at selling customer solutions (i.e., highly customized, performance-oriented offerings comprising products and/or services). The essay shows that successfully selling customized offerings like solutions hinges on salespeople’s unique configurations of present and absent competencies.
In a nutshell, these essays provide three major insights into how B2B suppliers can successfully implement digital and non-digital transformations. First, they underscore that a comprehensive understanding of the origins and spillover effects of transformations is a key prerequisite to successfully implementing them. Second, they unveil that digital and non-digital transformations affect multiple organizational levels. Third, they point out important resources and capabilities that help suppliers successfully implement transformations, be they digital or non-digital.
With this dissertation, I make substantial contributions to the broader literature on digital and non-digital transformations in B2B contexts. At the same time, my dissertation provides hands-on implications for managers in B2B supplier firms that are facing fundamental transformations in the marketplace—both digital and non-digital in nature.
The aim of this work is to investigate the narratives and discursive strategies of right-wing political actors and to evaluate them from a pedagogical perspective. The investigation focuses on communication within the social web. Discourse analysis is applied as the research method. The analysis concludes that both the narratives produced and the discursive strategies employed show strong similarities across all discursive arenas examined, regardless of whether an arena exhibits primarily right-wing extremist, right-wing radical, or right-wing populist tendencies. This suggests that a large proportion of right-wing political actors spread recurring narratives and underpin them with the same discursive strategies; a clear discursive boundary between right-wing extremism, right-wing radicalism, and right-wing populism does not appear to exist. As a result, there is a danger that right-wing extremist ideas increasingly penetrate the mainstream of society.
With the frequency and impact of data breaches rising, it has become essential for organizations to automate intrusion detection via machine learning solutions. This generally comes with numerous challenges, among them high class imbalance, changing target concepts, and difficulties in conducting sound evaluations. In this thesis, we adopt a user-centered anomaly detection perspective to address selected challenges of intrusion detection through a real-world use case in the identity and access management (IAM) domain. In addition to the above, salient properties of this particular problem are the high relevance of categorical data, limited feature availability, and the total absence of ground truth.
First, we ask how to apply anomaly detection to IAM audit logs containing a restricted set of mixed (i.e., numeric and categorical) attributes. Then, we inquire how anomalous user behavior can be separated from normality, and how this separation can be evaluated without ground truth. Finally, we examine how the lack of audit data can be alleviated in two complementary settings. On the one hand, we ask how to cope with users without relevant activity history (the "cold start" problem). On the other hand, we investigate how to extend audit data collection with heterogeneous attributes (i.e., categorical, graph, and text) to improve insider threat detection.
After aggregating IAM audit data into sessions, we introduce general anomaly detection methods for mixed data and compare them to a user identification approach designed to learn the distinction between normal and malicious user behavior. We find that user identification outperforms general anomaly detection and is effective against masquerades. An additional clustering step reduces false positives among similar users. However, user identification is not effective against insider threats. Furthermore, the results suggest that the current scope of our audit data collection should be extended.
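To make the user identification idea concrete, here is a minimal sketch assuming sessions are already encoded as numeric feature vectors; the classifier, feature encoding, and thresholding are illustrative stand-ins, not the thesis's exact pipeline:

    # Train a classifier to predict which user produced each session, then
    # flag sessions whose claimed user receives a low predicted probability.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    def fit_user_model(X_train, user_ids):
        model = RandomForestClassifier(n_estimators=200, random_state=0)
        model.fit(X_train, user_ids)
        return model

    def masquerade_scores(model, X, claimed_users):
        """One anomaly score per session: 1 - P(claimed user | session)."""
        proba = model.predict_proba(X)
        class_index = {u: i for i, u in enumerate(model.classes_)}
        cols = np.array([class_index[u] for u in claimed_users])
        return 1.0 - proba[np.arange(len(X)), cols]

    # Sessions scoring above a validation-chosen threshold are candidate
    # masquerades: the claimed identity looks unlikely given the behavior.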
In order to tackle the "cold start" problem, we adopt a zero-shot learning approach. Focusing on the CERT insider threat use case, we extend an intrusion detection system by integrating user relations to organizational entities (like assignments to projects or teams) in order to better estimate user behavior and improve intrusion detection performance. Results show that this approach is effective in two realistic scenarios.
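A minimal sketch of the cold-start intuition follows, assuming per-user behavior profiles and organizational memberships are available as plain Python structures; the data layout and averaging scheme are illustrative assumptions, not the thesis's zero-shot model:

    # With no activity history for a new user, approximate their expected
    # behavior profile from peers sharing organizational relations.
    import numpy as np

    def peer_profile(new_user, memberships, profiles):
        """Average the behavior profiles of users sharing at least one
        organizational entity (team, project) with `new_user`."""
        own = memberships[new_user]
        peers = [u for u, ents in memberships.items()
                 if u != new_user and ents & own and u in profiles]
        if not peers:  # fall back to the global average profile
            return np.mean(list(profiles.values()), axis=0)
        return np.mean([profiles[u] for u in peers], axis=0)

    # New users are then scored against this proxy profile until enough
    # of their own audit history has accumulated.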
Finally, to support additional sources of audit data for insider threat detection, we propose a method representing audit events as graph edges with heterogeneous attributes. By performing detection at a fine-grained level, this approach improves anomaly traceability while reducing the need for aggregation and feature engineering. Our results show that this method is effective at finding intrusions in authentication and email logs.
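To illustrate the event-as-edge representation, here is a minimal sketch using networkx; the entities, attribute names, and values are illustrative:

    # One edge per audit event; endpoints are entities (users, hosts).
    import networkx as nx

    G = nx.MultiDiGraph()

    def add_event(graph, src, dst, **attrs):
        """Insert an audit event (e.g. logon, email) as an attributed edge."""
        graph.add_edge(src, dst, **attrs)

    add_event(G, "alice", "srv-01", kind="auth", action="logon", hour=3)
    add_event(G, "alice", "bob", kind="email", n_attachments=4,
              subject_terms=["contract", "urgent"])

    # Scoring each edge individually keeps detection fine-grained: an
    # anomalous event traces back to a single edge rather than to an
    # aggregated per-user, per-day feature vector.
    for u, v, data in G.edges(data=True):
        print(u, v, data)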
Overall, our work suggests that masquerades and insider threats call for different detection methods. For masquerades, user identification is a promising approach. To find malicious insiders, graph features representing user context and relations to other entities can be informative. This opens the door for tighter coupling of intrusion detection with user identities, roles and privileges used in IAM solutions.
Newly arising phenomena in the occupational realm are strongly shaping contemporary work settings. These developments heavily affect how individuals work within and beyond organizational boundaries. Two phenomena associated with the changing nature of work have been especially prevalent in work settings and intensively discussed in public debates. First, organizations have started to introduce mindfulness practices to their workforce. Rooted in spirituality and formerly used in clinical therapy, mindfulness is applied as a human resource development practice to train employees and managers to cope with increased work intensification. Second, digitization and the importance of individualization have opened the path for work settings beyond organizational boundaries on crowdworking online platforms, where workers process tasks independently and remotely. Research has only started to address the implications and meaning of mindfulness practices in organizations and the rise of crowdworking platforms, and several questions remain unanswered.
This dissertation addresses unanswered but pressing questions related to these two phenomena in four essays. The first two essays address the application and meaning of mindfulness practices: the first analyzes the meaning and interpretations of these new practices within organizations; the second takes contextual factors of the organizational environment into account and investigates their relevance for the successful implementation of mindfulness practices. The remaining two essays are dedicated to work attitudes and behavior on crowdworking online platforms. Essay three captures individuals' motivation for working on such platforms and its effects on workers' performance. The last essay deals with the role of professional crowdworking online communities in the work experience and assesses the effects of social support in these communities on occupational identification, work meaningfulness, and ultimately work engagement. Each essay generates new insights into these arising phenomena, addressing timely yet unanswered research questions and thereby offering a deeper and more nuanced understanding of the role that mindfulness practices and crowdworking online platforms play in the future of work.
Comparative sport pedagogy, a subdiscipline at the intersection of comparative education and general sport pedagogy, sets itself the task of comparing sport-related characteristics of two or more countries or cultures and of making the resulting insights usable in a wide range of spheres. It is both a scientific discipline and a research method whose potential can be harnessed not only for physical education itself but also for the (university) education of those who will one day arrange physical education. In the context of a melioristically motivated comparison, this research project aims to identify differences and similarities in university physical education teacher training in Germany and the USA, to analyze their causes, and on this basis to reveal potentials, development perspectives, and options for action for the positive advancement of both university training systems. Both extra-systemic and intra-systemic aspects of investigation are taken into account.
Subjective theories held by primary school teachers about parents were explored qualitatively and empirically in order to contribute to school-home cooperation. Luhmann's systems theory and Fend's theory of the school form the theoretical basis. The relationship between teachers and parents was examined by means of three fields of interaction that mark decisive spaces of encounter between primary school teachers and parents (Luhmann, 2014; Fend, 2008). The participants' subjective theories were captured using semi-standardized guided interviews and an adapted structure-formation technique (Scheele & Groeben, 1988). From the teachers' perspective, individual and general subjective theories were elaborated. The results reveal professionally erroneous beliefs that are in part highly inhibiting for cooperation, even though the desire for cooperative interaction was consistently emphasized.
The current movement towards a smart grid addresses present power grid challenges by introducing numerous monitoring and communication technologies. A dependable yet timely exchange of data is, on the one hand, an essential prerequisite for Advanced Metering Infrastructure (AMI) services and, on the other hand, a challenging endeavor, because the increasing complexity of the grid, fostered by the combination of Information and Communications Technology (ICT) and utility networks, inherently leads to dependability challenges.
Current approaches to countering this dependability degradation, based on high-reliability hardware or physical redundancy, are no longer feasible, as they lead to increased hardware costs, increased maintenance, or both. Their flexibility regarding vendor and regulatory interoperability is also limited. At the same time, any suitable solution to the AMI dependability challenges must maintain regulatory-set performance and Quality of Service (QoS) levels.
While part of the challenge is the introduction of ICT into the power grid, ICT also serves as part of the solution. In this thesis, a Network Functions Virtualization (NFV) based approach is proposed that employs virtualized ICT components as replacements for physical devices. Virtualization techniques make it possible to enhance performability, in contrast to hardware-based solutions, through virtual replacements of processes that would otherwise require dedicated hardware. This approach offers higher flexibility than hardware redundancy, as a broad variety of virtual components can be spawned, adapted, and replaced in a short time. Also, as no additional hardware is necessary, the incurred costs decrease significantly. In addition, most of the virtualized components can be deployed on Commercial-Off-The-Shelf (COTS) hardware, further increasing the monetary benefit.
The approach is developed in four steps. First, currently suggested solutions for AMIs and related services are reviewed. Second, virtualization technologies are investigated for their performance influences, and a virtualized service infrastructure is devised that replaces selected components with virtualized counterparts. Third, a novel model is developed that separates services from hosting substrates, so that virtualization technologies can be introduced to abstract from the underlying architecture. The performability as well as the monetary savings are then investigated by evaluating the developed approach in several scenarios, using analytical and simulative model analysis as well as proof-of-concept implementations. Last, the practical applicability and possible regulatory challenges of the approach are identified and discussed.
The results confirm that, under certain assumptions, the developed virtualized AMI is superior to the currently suggested architecture. The availability of services can be considerably increased and network delays minimized through centralized hosting: in the given scenarios, availability rises from 96.82% to 98.66% while costs decrease by over 60% in comparison to the currently suggested AMI architecture. Lastly, a performability analysis of a virtualized service prototype, combining performance measurements with a Musa-Okumoto reliability growth model, reveals that the AMI requirements are fulfilled.
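For reference, two standard formulas commonly used in such analyses (the thesis's concrete parameter values are not reproduced here): the availability of n independent redundant instances, and the mean value function of the Musa-Okumoto logarithmic Poisson model.

    % Availability of n independent redundant instances, each with
    % availability A_i (parallel structure in a reliability block diagram):
    A_{\text{parallel}} = 1 - \prod_{i=1}^{n} (1 - A_i)

    % Musa-Okumoto model: expected number of failures observed by time t,
    % with initial failure intensity \lambda_0 and decay parameter \theta:
    \mu(t) = \frac{1}{\theta} \ln(\lambda_0 \theta t + 1)

The first formula captures why cheaply spawned virtual replicas can raise availability without extra hardware; the second underlies reliability growth estimation for the software prototype.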
Computer vision aims at developing algorithms to extract high-level information from images and videos. In industry, for instance, such algorithms are applied to guide manufacturing robots, to visually monitor plants, or to assist human operators in recognizing specific components. Recent progress in computer vision has been dominated by deep artificial neural networks, i.e., machine learning methods simulating the way information flows in our biological brains and the way our neural networks adapt and learn from experience. For these methods to learn how to accurately perform complex visual tasks, large amounts of annotated images are needed. Collecting and labeling such domain-relevant training datasets is, however, a tedious and sometimes impossible task. It has therefore become common practice to leverage pre-available three-dimensional (3D) models instead, generating synthetic images for the recognition algorithms to be trained on. However, methods optimized over synthetic data usually suffer a significant performance drop when applied to real target images. This is due to the realism gap, i.e., the discrepancies between synthetic and real images (in terms of noise, clutter, etc.). In my work, three main directions were explored to bridge this gap.
First, an innovative end-to-end framework is proposed to render realistic depth images from 3D models, as a growing number of solutions (especially in industry) utilize low-cost depth cameras (e.g., Microsoft Kinect and Intel RealSense) for recognition tasks. Based on a thorough study of these devices and the different types of noise impairing them, the proposed framework simulates their inner mechanisms, comprehensively modeling vital factors such as sensor noise, material reflectance, and surface geometry. Able to simulate a wide range of depth sensors and to quickly generate large datasets, this framework is used to train algorithms for various recognition tasks, consistently and significantly enhancing their performance compared to other state-of-the-art simulation tools.
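As an illustration of one ingredient of such a simulation, the following sketch adds distance-dependent axial noise and random dropout to a clean synthetic depth map; the quadratic noise model and coefficients follow a commonly cited Kinect noise characterization, and the snippet is only a toy stand-in for the full sensor pipeline described above:

    import numpy as np

    def add_depth_noise(depth_m, sigma_a=0.0012, sigma_b=0.0019,
                        dropout_p=0.01, rng=None):
        """Perturb a depth map (meters) with axial noise whose standard
        deviation grows quadratically with distance, plus pixel dropout."""
        rng = rng or np.random.default_rng()
        sigma = sigma_a + sigma_b * (depth_m - 0.4) ** 2  # Kinect-style model
        noisy = depth_m + rng.normal(0.0, 1.0, depth_m.shape) * sigma
        noisy[rng.random(depth_m.shape) < dropout_p] = 0.0  # missing readings
        return noisy

    clean = np.full((480, 640), 2.0)  # flat synthetic scene at 2 m
    noisy = add_depth_noise(clean)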
In some cases, however, relevant 2D or 3D object representations from which to generate synthetic samples are not available. For this different case of data scarcity, a solution is proposed to incrementally build a representation of visual scenes from partial observations. Provided observations are localized relative to one another based on their content and registered in a global memory with spatial properties. Simultaneously, this memory can be queried to render novel views of the scene. Furthermore, unobserved regions can be hallucinated in memory, consistently with previous observations, hallucinations, and global priors. The efficacy of the proposed mnemonic and generative system, trainable end-to-end, is demonstrated on various 2D and 3D use cases.
Finally, an advanced convolutional neural network pipeline is introduced, tackling the realism gap from a novel angle. While most methods addressing this problem focus on bringing synthetic samples—or the knowledge acquired from them—closer to the real target domain, the proposed solution performs the opposite process, mapping unseen target images into controlled synthetic domains. The pre-processed samples can then be handed to downstream recognition methods, themselves purely trained on similar synthetic data, to greatly improve their accuracy.
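A minimal sketch of this reverse-mapping idea follows, with placeholder networks; the architectures, tensor shapes, and class count are illustrative, standing in for a trained image-to-image translation network feeding a recognizer trained purely on synthetic data:

    import torch
    import torch.nn as nn

    real_to_synth = nn.Sequential(          # image-to-image "un-realism" mapper
        nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
        nn.Conv2d(32, 3, 3, padding=1),
    )
    recognizer = nn.Sequential(             # trained on synthetic images only
        nn.Conv2d(3, 16, 3, stride=2), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        nn.Linear(16, 10),                  # e.g. 10 object classes
    )

    real_batch = torch.rand(4, 3, 64, 64)   # unseen real target images
    with torch.no_grad():
        logits = recognizer(real_to_synth(real_batch))
    print(logits.shape)  # torch.Size([4, 10])

The design choice is that the recognizer never sees real data at training time; only the mapper has to absorb the domain shift.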
For each approach, a variety of qualitative and quantitative studies are detailed, providing successful comparisons to state-of-the-art methods. By proposing solutions to bridge the realism gap from either side, as well as a pipeline to improve the acquisition and generation of new visual content, this thesis provides a unique perspective on the challenges of data scarcity when building robust recognition systems.