
To All My Dear Visitors,


Greetings and welcome to my personal website. I am Van-Hai Bui, from Vietnam. I am currently an Assistant Professor in the Department of Electrical and Computer Engineering at the University of Michigan-Dearborn, Michigan, USA. My research interests include energy management systems, the application of AI/ML in smart grids, and the operation and control of microgrids. For more details, please feel free to reach out to me directly via email at vhbui@umich.edu.


  • Guest Editor: 

  1. Energies (Title: Planning and Operation of Microgrids)

  2. Future Internet (Title: Multi-Agent Deep Reinforcement Learning for Distributed Operation and Control of Microgrids)

  3. Algorithms (Title: Reinforcement Learning and Its Applications in Modern Power and Energy Systems)

  4. Frontiers in Smart Grids (Title: Machine Learning Solutions for Renewable Energy Integration in Power Systems)

    

Research Areas:

 

1. Optimal Operation of Microgrids (Smart Grids) 

[Figure: Smart grid concept with microturbines, fuel cells, energy storage, and renewable sources supplying loads]

The figure above illustrates the concept of a smart grid, in which microturbines and fuel cells supply power to loads and their output is regulated by the Market Operator based on the available renewable power. The grid can operate independently of the substation (islanded mode) or connected to it (grid-connected mode). In a smart grid, energy storage systems play a pivotal role in maintaining system stability and minimizing overall operational costs. The primary challenges involve determining the optimal sizing of distributed energy resources and energy storage capacities, as well as coordinating the power sources so that load demand is met and revenue is maximized in both grid-connected and islanded modes.
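To make the operation problem concrete, below is a minimal sketch of a single-period economic dispatch for a grid-connected microgrid, written in Python with SciPy. All cost coefficients, capacities, and the load/renewable figures are assumptions for illustration, not data from any real system: two dispatchable units (a microturbine and a fuel cell) and grid trading cooperate to serve the residual load at minimum cost.

# Minimal single-period economic dispatch sketch (assumed numbers only).
from scipy.optimize import linprog

load = 120.0       # kW, assumed load demand for this hour
renewable = 40.0   # kW, assumed available renewable power

# Decision variables: [P_mt, P_fc, P_buy, P_sell] in kW.
# Objective: minimize fuel costs plus grid purchases minus sales revenue.
cost = [0.08, 0.10, 0.12, -0.06]   # assumed $/kWh coefficients

# Power balance: P_mt + P_fc + P_buy - P_sell = load - renewable
A_eq = [[1.0, 1.0, 1.0, -1.0]]
b_eq = [load - renewable]

# Capacity limits (assumed): microturbine 60 kW, fuel cell 50 kW,
# grid exchange capped at 100 kW in each direction.
bounds = [(0, 60), (0, 50), (0, 100), (0, 100)]

res = linprog(cost, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
p_mt, p_fc, p_buy, p_sell = res.x
print(f"Microturbine {p_mt:.1f} kW, fuel cell {p_fc:.1f} kW, "
      f"buy {p_buy:.1f} kW, sell {p_sell:.1f} kW, cost ${res.fun:.2f}")

Setting the grid-exchange bounds to zero turns the same model into an islanded-mode dispatch; extending it over multiple periods with storage state-of-charge constraints yields the scheduling problems studied in this research area.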


 

2. Energy Management Systems (EMSs)


An Energy Management System (EMS) is a set of computer-aided tools used by operators of electric utility grids to monitor, control, and optimize the performance of the generation and transmission systems. The monitoring and control functions fall under the umbrella of Supervisory Control and Data Acquisition (SCADA), while the optimization tools are commonly referred to as "advanced applications."

This combination of computer technologies is alternatively known as SCADA/EMS or EMS/SCADA. In these contexts, the term "EMS" pertains specifically to the suite of power network applications and encompasses functions related to generation control and scheduling, excluding the monitoring and control functions.

Moreover, EMS is frequently employed by individual commercial entities to oversee, measure, and manage their electrical loads within buildings. It enables centralized control of devices such as HVAC units and lighting systems across various locations, including retail outlets, grocery stores, and restaurants. Energy management systems also provide metering, submetering, and monitoring capabilities, empowering facility and building managers with data and insights to make informed decisions about energy-related activities throughout their sites.
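As a small illustration of the submetering idea, the following sketch aggregates interval readings per circuit and flags any circuit whose daily consumption exceeds a baseline. The circuit names, readings, and baseline values are all hypothetical, chosen only to show the kind of insight a building EMS surfaces to facility managers.

# Aggregate hypothetical submeter readings and flag over-baseline circuits.
from collections import defaultdict

# (circuit, kWh) readings for one day, e.g. from 15-minute interval meters
readings = [
    ("hvac", 3.2), ("lighting", 1.1), ("hvac", 2.9),
    ("refrigeration", 4.0), ("lighting", 0.9), ("hvac", 3.5),
]

baseline_kwh = {"hvac": 8.0, "lighting": 2.5, "refrigeration": 5.0}  # assumed

totals = defaultdict(float)
for circuit, kwh in readings:
    totals[circuit] += kwh

for circuit, total in sorted(totals.items()):
    over = total > baseline_kwh.get(circuit, float("inf"))
    flag = "OVER BASELINE" if over else "ok"
    print(f"{circuit:14s} {total:6.1f} kWh  {flag}")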

3. Reinforcement Learning

Reinforcement learning (RL) is a sub-area of machine learning. In RL-based distributed operation, an agent learns to take suitable actions to maximize a numerical reward in a particular situation. RL is used by software agents and machines to find the best possible behavior or path to take in a specific situation. The difference between RL and supervised learning lies in the data the agent uses during learning. In supervised learning, the training data includes the answer key, so the model is trained on correct answers. In RL, there is no labeled training data: the learning agent makes its own decisions and takes actions to achieve a specific goal. In the absence of a training data set, the agent learns from experience and improves its actions after every learning step. The figure above shows the basic principle diagram of the RL method. RL can be understood through the concepts of agents, environments, states, actions, and rewards, all of which are explained below; a minimal code sketch follows the list.

      Learning agent: A learning agent is an entity introduced with specific goals. The agent takes actions and learns from its experiences, always trying to take better actions to receive higher rewards, so that it maximizes the cumulative reward over the long run.

      Action (A): A is the set of all possible actions. At every step, the agent chooses an action (a) from the set of possible actions (A) until a terminal state is reached.

      Environment: The environment represents the world with which the agent interacts during the learning period. It takes the agent's current state and action as input and returns a reinforcement signal (reward or penalty) and the next state as output.

      State (S): A state describes a particular situation in which the agent finds itself, e.g., a specific place and moment, or more generally the current situation returned by the environment.

      Reward (R): A reward is a response from the environment by which the quality of the agent's actions is measured. From any given state, the agent performs an action in the environment, and the environment returns a reward and a new state; the reward is used to evaluate the agent's action.
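The sketch below ties these concepts together with minimal tabular Q-learning. The five-state "chain" environment, the reward scheme, and the hyperparameters are assumptions chosen purely for illustration: the agent starts in state 0, can move left or right, and receives a reward of 1 only upon reaching the terminal state.

# Minimal tabular Q-learning on an assumed 5-state chain environment.
import random

N_STATES = 5            # states 0..4; state 4 is terminal
ACTIONS = [0, 1]        # 0 = move left, 1 = move right

def step(state, action):
    """Environment: returns (next_state, reward) for the agent's action."""
    next_state = max(0, state - 1) if action == 0 else min(N_STATES - 1, state + 1)
    reward = 1.0 if next_state == N_STATES - 1 else 0.0
    return next_state, reward

alpha, gamma, epsilon = 0.1, 0.9, 0.2   # assumed learning hyperparameters
Q = [[0.0, 0.0] for _ in range(N_STATES)]

for episode in range(500):
    state = 0
    while state != N_STATES - 1:
        # Epsilon-greedy: mostly exploit the best-known action, sometimes explore.
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: Q[state][a])
        next_state, reward = step(state, action)
        # Q-learning update: move Q(s,a) toward reward + gamma * max_a' Q(s',a').
        best_next = max(Q[next_state])
        Q[state][action] += alpha * (reward + gamma * best_next - Q[state][action])
        state = next_state

# The greedy policy derived from the learned Q-table (expected: all 1s, i.e. "right").
print([max(ACTIONS, key=lambda a: Q[s][a]) for s in range(N_STATES - 1)])

After training, the greedy policy moves right from every state, showing how the agent improves its actions from reward signals alone, without any labeled training data.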
