Blog

We collect key news items from free RSS services; the feed is updated every 3 hours, 24/7.

Artificial Intelligence

MIT-Takeda Program wraps up with 16 publications, a patent, and nearly two dozen projects completed

MIT News - Artificial intelligence When the Takeda Pharmaceutical Co. and the MIT School of Engineering launched their collaboration focused on artificial intelligence in health care and drug development in February 2020, society was on the cusp of a globe-altering pandemic and AI was far from the buzzword it is today. As the program concludes, the world looks very different. AI has become a transformative technology across industries including health care and pharmaceuticals, while the pandemic has altered the way many businesses approach health care and changed how they develop and sell medicines. For both MIT and Takeda, the program has been a game-changer. When it launched, the collaborators hoped the program would help solve tangible, real-world problems. By its end, the program has yielded a catalog of new research papers, discoveries, and lessons learned, including a patent for a system that could improve the manufacturing of small-molecule medicines. Ultimately, the program allowed both entities to create a foundation for a world where AI and machine learning play a pivotal role in medicine, leveraging Takeda’s expertise in biopharmaceuticals and the MIT researchers’ deep understanding of AI and machine learning. “The MIT-Takeda Program has been tremendously impactful and is a shining example of what can be accomplished when experts in industry and academia work together to develop solutions,” says Anantha Chandrakasan, MIT’s chief innovation and strategy officer, dean of the School of Engineering, and the Vannevar Bush Professor of Electrical Engineering and Computer Science. “In addition to resulting in research that has advanced how we use AI and machine learning in health care, the program has opened up new opportunities for MIT faculty and students through fellowships, funding, and networking.” What made the program unique was that it was centered around several concrete challenges spanning drug development that Takeda needed help addressing. MIT faculty had the opportunity to select the projects based on...
Read More
Artificial Intelligence

Improving air quality with generative AI

AWS Machine Learning Blog As of this writing, Ghana ranks as the 27th most polluted country in the world, facing significant challenges due to air pollution. Recognizing the crucial role of air quality monitoring, many African countries, including Ghana, are adopting low-cost air quality sensors. The Sensor Evaluation and Training Centre for West Africa (Afri-SET) aims to use technology to address these challenges. Afri-SET engages with air quality sensor manufacturers, providing crucial evaluations tailored to the African context. Through evaluations of sensors and informed decision-making support, Afri-SET empowers governments and civil society for effective air quality management. On December 6–8, 2023, the non-profit organization Tech to the Rescue, in collaboration with AWS, organized the world’s largest Air Quality Hackathon, aimed at tackling one of the world’s most pressing health and environmental challenges: air pollution. More than 170 tech teams used the latest cloud, machine learning and artificial intelligence technologies to build 33 solutions. The solution addressed in this blog solves Afri-SET’s challenge and was ranked among the top 3 winning solutions. This post presents a solution that uses generative artificial intelligence (AI) to standardize air quality data from low-cost sensors in Africa, specifically addressing the data integration problem posed by low-cost sensors. The solution harnesses the capabilities of generative AI, specifically large language models (LLMs), to address the challenges posed by diverse sensor data and automatically generate Python functions based on various data formats. The fundamental objective is to build a manufacturer-agnostic database, leveraging generative AI’s ability to standardize sensor outputs, synchronize data, and facilitate precise corrections. Current challenges Afri-SET currently merges data from numerous sources, employing a bespoke approach for each of the sensor manufacturers. This manual synchronization process, hindered by disparate data formats, is resource-intensive, limiting the potential for widespread data orchestration. The platform, although...
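The article above is truncated, but the core pattern it describes (asking an LLM to write a Python normalization function for each manufacturer's raw payload) can be sketched roughly as follows. This is a minimal illustration, not Afri-SET's actual code: the Bedrock model ID, the target schema, and the prompt wording are all assumptions.

```python
# Hypothetical sketch: ask an LLM on Amazon Bedrock to write a normalization function
# for one manufacturer's raw sensor payload. The model ID, target schema, and prompt
# are illustrative assumptions, not Afri-SET's actual implementation.
import json
import boto3

bedrock = boto3.client("bedrock-runtime")

TARGET_SCHEMA = ["timestamp_utc", "pm2_5_ugm3", "pm10_ugm3", "temperature_c", "humidity_pct"]

def generate_normalizer(sample_payload: dict) -> str:
    """Return Python source for a normalize_record(raw: dict) -> dict function."""
    prompt = (
        "Write a Python function `normalize_record(raw: dict) -> dict` that converts the "
        f"following air quality sensor payload into the keys {TARGET_SCHEMA}. "
        "Return only code.\n\n"
        f"Sample payload:\n{json.dumps(sample_payload, indent=2)}"
    )
    body = {
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": 1024,
        "messages": [{"role": "user", "content": [{"type": "text", "text": prompt}]}],
    }
    response = bedrock.invoke_model(
        modelId="anthropic.claude-3-haiku-20240307-v1:0",  # illustrative model choice
        body=json.dumps(body),
    )
    return json.loads(response["body"].read())["content"][0]["text"]

print(generate_normalizer({"ts": "2024-06-18 10:00", "PM25": "12.4", "PM10": "30.1", "T": 29, "RH": 71}))
```

A generated `normalize_record` function would then be reviewed and registered per manufacturer, so new sensor formats can be onboarded without hand-writing a bespoke parser each time.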
Read More
Artificial Intelligence

Use zero-shot large language models on Amazon Bedrock for custom named entity recognition

AWS Machine Learning Blog Named entity recognition (NER) is the process of extracting information of interest, called entities, from structured or unstructured text. Manually identifying all mentions of specific types of information in documents is extremely time-consuming and labor-intensive. Some examples include extracting players and positions in an NFL game summary, products mentioned in an AWS keynote transcript, or key names from an article on a favorite tech company. This process must be repeated for every new document and entity type, making it impractical for processing large volumes of documents at scale. With more access to vast amounts of reports, books, articles, journals, and research papers than ever before, swiftly identifying desired information in large bodies of text is becoming invaluable. Traditional neural network models like RNNs and LSTMs and more modern transformer-based models like BERT for NER require costly fine-tuning on labeled data for every custom entity type. This makes adopting and scaling these approaches burdensome for many applications. However, new capabilities of large language models (LLMs) enable high-accuracy NER across diverse entity types without the need for entity-specific fine-tuning. By using the model’s broad linguistic understanding, you can perform NER on the fly for any specified entity type. This capability is called zero-shot NER and enables the rapid deployment of NER across documents and many other use cases. This ability to extract specified entity mentions without costly tuning unlocks scalable entity extraction and downstream document understanding. In this post, we cover the end-to-end process of using LLMs on Amazon Bedrock for the NER use case. Amazon Bedrock is a fully managed service that offers a choice of high-performing foundation models (FMs) from leading artificial intelligence (AI) companies like AI21 Labs, Anthropic, Cohere, Meta, Mistral AI, Stability AI, and Amazon through a single API, along with a broad set...
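The full walkthrough is not included in this excerpt; the snippet below is only a minimal sketch of the zero-shot prompting idea using the Amazon Bedrock InvokeModel API from boto3. The entity types, prompt wording, and model ID are illustrative assumptions.

```python
# Minimal zero-shot NER sketch against Amazon Bedrock. The entity types, prompt wording,
# and model ID are illustrative assumptions, not the exact setup from the post.
import json
import boto3

bedrock = boto3.client("bedrock-runtime")

def zero_shot_ner(text: str, entity_types: list[str]) -> dict:
    """Ask the model to return a JSON object mapping each entity type to its mentions."""
    prompt = (
        f"Extract all mentions of the following entity types from the text: {entity_types}. "
        'Respond with JSON only, for example {"player": ["..."], "position": ["..."]}.\n\n'
        f"Text:\n{text}"
    )
    body = {
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": 512,
        "messages": [{"role": "user", "content": [{"type": "text", "text": prompt}]}],
    }
    response = bedrock.invoke_model(
        modelId="anthropic.claude-3-sonnet-20240229-v1:0",  # any Bedrock text model could be used
        body=json.dumps(body),
    )
    completion = json.loads(response["body"].read())["content"][0]["text"]
    return json.loads(completion)  # assumes the model honored the JSON-only instruction

print(zero_shot_ner("Patrick Mahomes lined up at quarterback for the Chiefs.", ["player", "position"]))
```

Because the entity types are supplied at request time, the same function covers new entity types without any fine-tuning, which is the point of the zero-shot approach described above.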
Read More
Artificial Intelligence

Streamline financial workflows with generative AI for email automation

AWS Machine Learning Blog Many companies across all industries still rely on laborious, error-prone, manual procedures to handle documents, especially those that are sent to them by email. Despite the availability of technology that can digitize and automate document workflows through intelligent automation, businesses still mostly rely on labor-intensive manual document processing. This represents a major opportunity for businesses to optimize this workflow, save time and money, and improve accuracy by modernizing antiquated manual document handling with intelligent document processing (IDP) on AWS. To extract key information from high volumes of documents from emails and various sources, companies need comprehensive automation capable of ingesting emails, file uploads, and system integrations for seamless processing and analysis. Intelligent automation presents a chance to revolutionize document workflows across sectors through digitization and process optimization. This post explains a generative artificial intelligence (AI) technique to extract insights from business emails and attachments. It examines how AI can optimize financial workflow processes by automatically summarizing documents, extracting data, and categorizing information from email attachments. This enables companies to serve more clients, direct employees to higher-value tasks, speed up processes, lower expenses, enhance data accuracy, and increase efficiency. Challenges with manual data extraction The majority of business sectors are currently having difficulties with manual document processing, and are reading emails and their attachments without the use of an automated system. These procedures cost money, take a long time, and are prone to mistakes. Manual procedures struggle to keep up with the number of documents. Finding relevant information that is necessary for business decisions is difficult. Therefore, there is a demand for shorter decision cycles and speedier document processing. The aim of this post is to help companies that process documents manually to speed up the delivery of data derived from those documents for use in business...
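As a rough illustration of the idea rather than the post's architecture, the sketch below parses an inbound email with Python's standard email package and asks an LLM on Amazon Bedrock to pull out a few fields. The field names, prompt, and model choice are assumptions; a production pipeline would also convert or OCR binary attachments before prompting.

```python
# Illustrative only: parse an inbound email with the standard library, gather its
# plain-text content, and ask an LLM on Amazon Bedrock to extract a few fields.
# The field list, prompt, and model ID are assumptions; binary attachments (PDFs,
# scans) would need conversion or OCR before prompting.
import json
from email import policy
from email.parser import BytesParser

import boto3

bedrock = boto3.client("bedrock-runtime")

def extract_invoice_fields(raw_email: bytes) -> str:
    message = BytesParser(policy=policy.default).parsebytes(raw_email)
    text_parts = [part.get_content() for part in message.walk()
                  if part.get_content_type() == "text/plain"]
    prompt = (
        "From the email below, extract invoice_number, amount_due, currency, and due_date "
        "as JSON, and add a one-sentence summary field.\n\n" + "\n".join(text_parts)
    )
    body = {
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": 512,
        "messages": [{"role": "user", "content": [{"type": "text", "text": prompt}]}],
    }
    response = bedrock.invoke_model(
        modelId="anthropic.claude-3-haiku-20240307-v1:0",
        body=json.dumps(body),
    )
    return json.loads(response["body"].read())["content"][0]["text"]
```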
Read More
Covid-19

Global failure to prepare for pandemics ‘gambling with children’s future’

Coronavirus | The Guardian Lessons from Ebola and Covid were not learned, say Helen Clark and Ellen Johnson Sirleaf as they launch report calling for urgent action. World leaders are “gambling with their children’s and grandchildren’s health and wellbeing” by failing to prepare for a future pandemic, a new report warns. Amid surging cases of H5N1 bird flu in mammals, and an mpox outbreak in central Africa, two senior stateswomen have said the lack of preparation had left the world vulnerable to “devastation”. Continue reading... Go to Source 18/06/2024 - 15:43 /Kat Lay, Global health correspondent Twitter: @hoffeldtcom
Read More
Business News

Singtel-KKR consortium to invest $1.75 billion in data centre provider ST Telemedia GDC

The Straits Times Business News The company's regional data centre business, Nxera, will also be partnering Malaysia telco TM. Go to Source 18/06/2024 - 13:39 / Twitter: @hoffeldtcom
Read More
Management

In Her Own Words: Tansy McNulty aims to end police violence

Human Resources News - Human Resources News Headlines | Bizjournals.com Career decisions often mark time as well as place — current events as well as personal ones. Tansy McNulty’s experience in supply chain management strengthened her role as an advocate, but the deaths of three men she didn’t know inspired her move from corporate to community. In the Summer of 2016, while on a babymoon with my husband, we disconnected from the world. We returned to news of the murders of Alton Sterling, Philando Castile, and Ronnie Shumpert. It shook my world as I was carrying… Go to Source 18/06/2024 - 12:23 /Ellen Sherberg Twitter: @hoffeldtcom
Read More
Artificial Intelligence

Researchers leverage shadows to model 3D scenes, including objects blocked from view

MIT News - Artificial intelligence Imagine driving through a tunnel in an autonomous vehicle, but unbeknownst to you, a crash has stopped traffic up ahead. Normally, you’d need to rely on the car in front of you to know you should start braking. But what if your vehicle could see around the car ahead and apply the brakes even sooner? Researchers from MIT and Meta have developed a computer vision technique that could someday enable an autonomous vehicle to do just that. They have introduced a method that creates physically accurate, 3D models of an entire scene, including areas blocked from view, using images from a single camera position. Their technique uses shadows to determine what lies in obstructed portions of the scene. They call their approach PlatoNeRF, based on Plato’s allegory of the cave, a passage from the Greek philosopher’s “Republic” in which prisoners chained in a cave discern the reality of the outside world based on shadows cast on the cave wall. By combining lidar (light detection and ranging) technology with machine learning, PlatoNeRF can generate more accurate reconstructions of 3D geometry than some existing AI techniques. Additionally, PlatoNeRF is better at smoothly reconstructing scenes where shadows are hard to see, such as those with high ambient light or dark backgrounds. In addition to improving the safety of autonomous vehicles, PlatoNeRF could make AR/VR headsets more efficient by enabling a user to model the geometry of a room without the need to walk around taking measurements. It could also help warehouse robots find items in cluttered environments faster. “Our key idea was taking these two things that have been done in different disciplines before and pulling them together — multibounce lidar and machine learning. It turns out that when you bring these two together, that is when you find a lot of new opportunities to...
Read More
Business News

Low investment blocking UK growth, says think tank

BBC News Both Conservative and Labour plan to reduce government investment over the next parliamentary term Go to Source 18/06/2024 - 06:23 / Twitter: @hoffeldtcom
Read More
Business News

Singapore’s key exports dip 0.1% in May, mildest decline in 20 months

The Straits Times Business News Electronic exports posted the first double-digit growth in 22 months. Go to Source 18/06/2024 - 03:28 / Twitter: @hoffeldtcom
Read More
Artificial Intelligence

Understanding the visual knowledge of language models

MIT News - Artificial intelligence You’ve likely heard that a picture is worth a thousand words, but can a large language model (LLM) get the picture if it’s never seen images before? As it turns out, language models that are trained purely on text have a solid understanding of the visual world. They can write image-rendering code to generate complex scenes with intriguing objects and compositions — and even when that knowledge is not used properly, LLMs can refine their images. Researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) observed this when prompting language models to self-correct their code for different images, where the systems improved on their simple clipart drawings with each query. The visual knowledge of these language models is gained from how concepts like shapes and colors are described across the internet, whether in language or code. When given a direction like “draw a parrot in the jungle,” users jog the LLM to consider what it’s read in descriptions before. To assess how much visual knowledge LLMs have, the CSAIL team constructed a “vision checkup” for LLMs: using their “Visual Aptitude Dataset,” they tested the models’ abilities to draw, recognize, and self-correct these concepts. Collecting each final draft of these illustrations, the researchers trained a computer vision system that identifies the content of real photos. “We essentially train a vision system without directly using any visual data,” says Tamar Rott Shaham, co-lead author of the study and an MIT electrical engineering and computer science (EECS) postdoc at CSAIL. “Our team queried language models to write image-rendering codes to generate data for us and then trained the vision system to evaluate natural images. We were inspired by the question of how visual concepts are represented through other mediums, like text. To express their visual knowledge, LLMs can use code as...
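The paper's pipeline is not reproduced here, but the self-correction loop it describes (ask a text-only LLM for image-rendering code, then ask it to improve its own output) can be sketched as follows. The drawing prompt, the use of matplotlib as the rendering target, the two refinement rounds, and the Bedrock/Claude backend are all illustrative assumptions.

```python
# Rough sketch of the self-correction loop: ask a text-only LLM for matplotlib code that
# draws a concept, then ask it to critique and improve its own code. Prompt wording,
# the number of rounds, and the Bedrock/Claude backend are assumptions.
import json
import boto3

bedrock = boto3.client("bedrock-runtime")

def complete(prompt: str) -> str:
    body = {
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": 1024,
        "messages": [{"role": "user", "content": [{"type": "text", "text": prompt}]}],
    }
    response = bedrock.invoke_model(
        modelId="anthropic.claude-3-sonnet-20240229-v1:0",
        body=json.dumps(body),
    )
    return json.loads(response["body"].read())["content"][0]["text"]

concept = "a parrot in the jungle"
code = complete(f"Write matplotlib code that draws {concept}. Return only Python code.")
for _ in range(2):  # a couple of self-correction rounds
    code = complete(
        f"Here is code meant to draw {concept}:\n\n{code}\n\n"
        "Improve the drawing (shapes, colors, composition). Return only the revised code."
    )
print(code)  # the rendered drawings could then serve as training data for a vision model
```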
Read More
Artificial Intelligence

How Twilio used Amazon SageMaker MLOps pipelines with PrestoDB to enable frequent model retraining and optimized batch transform

AWS Machine Learning Blog This post is co-written with Shamik Ray, Srivyshnav K S, Jagmohan Dhiman and Soumya Kundu from Twilio. Today’s leading companies trust Twilio’s Customer Engagement Platform (CEP) to build direct, personalized relationships with their customers everywhere in the world. Twilio enables companies to use communications and data to add intelligence and security to every step of the customer journey, from sales and marketing to growth and customer service, and many more engagement use cases in a flexible, programmatic way. Across 180 countries, millions of developers and hundreds of thousands of businesses use Twilio to create magical experiences for their customers. Being one of the largest AWS customers, Twilio engages with data and artificial intelligence and machine learning (AI/ML) services to run their daily workloads. This post outlines the steps AWS and Twilio took to migrate Twilio’s existing machine learning operations (MLOps), the implementation of training models, and running batch inferences to Amazon SageMaker. ML models don’t operate in isolation. They must integrate into existing production systems and infrastructure to deliver value. This necessitates considering the entire ML lifecycle during design and development. With the right processes and tools, MLOps enables organizations to reliably and efficiently adopt ML across their teams for their specific use cases. SageMaker includes a suite of features for MLOps that includes Amazon SageMaker Pipelines and Amazon SageMaker Model Registry. Pipelines allow for straightforward creation and management of ML workflows while also offering storage and reuse capabilities for workflow steps. The model registry simplifies model deployment by centralizing model tracking. This post focuses on how to achieve flexibility in using your data source of choice and integrate it seamlessly with Amazon SageMaker Processing jobs. With SageMaker Processing jobs, you can use a simplified, managed experience to run data preprocessing or postprocessing and model evaluation...
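Twilio's actual pipeline definition is not shown in this excerpt. The sketch below only illustrates the building blocks the post names: a SageMaker Processing step wrapped in a SageMaker Pipeline that could be started on a retraining schedule. The IAM role ARN, instance settings, and the preprocess.py script (which would hold the PrestoDB query logic) are placeholders.

```python
# Minimal SageMaker Pipelines sketch: one Processing step that would run a data-prep
# script (for example, one that pulls rows from PrestoDB). Role ARN, instance settings,
# and preprocess.py are placeholders, not Twilio's actual configuration.
from sagemaker.sklearn.processing import SKLearnProcessor
from sagemaker.processing import ProcessingOutput
from sagemaker.workflow.pipeline import Pipeline
from sagemaker.workflow.steps import ProcessingStep

role = "arn:aws:iam::123456789012:role/SageMakerExecutionRole"  # placeholder

processor = SKLearnProcessor(
    framework_version="1.2-1",
    role=role,
    instance_type="ml.m5.xlarge",
    instance_count=1,
)

step_prepare = ProcessingStep(
    name="PrepareTrainingData",
    processor=processor,
    code="preprocess.py",  # hypothetical script that queries PrestoDB and writes CSVs
    outputs=[ProcessingOutput(output_name="train", source="/opt/ml/processing/train")],
)

pipeline = Pipeline(name="retraining-pipeline", steps=[step_prepare])
pipeline.upsert(role_arn=role)   # create or update the pipeline definition
pipeline.start()                 # trigger a run, e.g., on a retraining schedule
```

Further steps (training, model registration in the SageMaker Model Registry, and batch transform) would be appended to the same pipeline in the same fashion.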
Read More
Business News

How immigrants are helping keep job growth hot while inflation cools

US Top News and Analysis Recent spikes in immigration at the southern border and elsewhere in the U.S. have helped to keep the labor pool full, even as job gains kept apace. Go to Source 17/06/2024 - 15:18 / Twitter: @hoffeldtcom
Read More
Artificial Intelligence

A smarter way to streamline drug discovery

MIT News - Artificial intelligence The use of AI to streamline drug discovery is exploding. Researchers are deploying machine-learning models to help them identify molecules, among billions of options, that might have the properties they are seeking to develop new medicines. But there are so many variables to consider — from the price of materials to the risk of something going wrong — that even when scientists use AI, weighing the costs of synthesizing the best candidates is no easy task. The myriad challenges involved in identifying the best and most cost-efficient molecules to test is one reason new medicines take so long to develop, as well as a key driver of high prescription drug prices. To help scientists make cost-aware choices, MIT researchers developed an algorithmic framework to automatically identify optimal molecular candidates, which minimizes synthetic cost while maximizing the likelihood candidates have desired properties. The algorithm also identifies the materials and experimental steps needed to synthesize these molecules. Their quantitative framework, known as Synthesis Planning and Rewards-based Route Optimization Workflow (SPARROW), considers the costs of synthesizing a batch of molecules at once, since multiple candidates can often be derived from some of the same chemical compounds. Moreover, this unified approach captures key information on molecular design, property prediction, and synthesis planning from online repositories and widely used AI tools. Beyond helping pharmaceutical companies discover new drugs more efficiently, SPARROW could be used in applications like the invention of new agrichemicals or the discovery of specialized materials for organic electronics. “The selection of compounds is very much an art at the moment — and at times it is a very successful art. But because we have all these other models and predictive tools that give us information on how molecules might perform and how they might be synthesized, we can and should be using that information...
Read More
Covid-19

Boss of US firm given £4bn in UK Covid contracts accused of squandering millions on jets and properties

Coronavirus | The Guardian Rishi Sunak’s team helped fast-track deal with firm founded by Charles Huang, who says contracts generated $2bn profit. In California, state of sunshine and palm trees, a small group of men are locked in a big legal fight over the money made by a US company selling Covid tests to the British government. The founder of Innova Medical Group says his business collected $2bn (£1.6bn) in profits, one of the largest fortunes banked by any medical supplier during the scramble for lifesaving equipment in the early months of the pandemic. In a storm of claims and counter-claims, Innova’s boss, Charles Huang, is accused by former associates of “squandering” or moving $1bn of those profits, spending lavishly on luxury aircraft, an $18m house in Los Angeles and “homes for his mistresses”. Continue reading... Go to Source 17/06/2024 - 13:09 /David Conn and Russell Scott Twitter: @hoffeldtcom
Read More
Management

Longtime talent manager: The secret to having everything is …

Human Resources News - Human Resources News Headlines | Bizjournals.com if you don’t experience some discomfort, you’re probably not going to drive change, says Sharon Randaccio of Performance Management Partners Inc. Go to Source 17/06/2024 - 12:15 /Lian Bunny Twitter: @hoffeldtcom
Read More
Business News

Weekly Money FM Podcasts: Navigating Reit challenges for Mapletree, Changi Business Park

The Straits Times Business News Check out Money FM's best weekly podcasts. Go to Source 17/06/2024 - 00:04 / Twitter: @hoffeldtcom
Read More
Covid-19

UK attractions try to win back visitors as post-Covid ‘revenge spending’ ends

Coronavirus | The Guardian Alton Towers and Legoland owner alters tactics after period of VAT cuts and people spending cash saved during lockdowns. The period of post-Covid “revenge spending” has ended, leaving businesses having to look at different ways to attract customers, the chief operating officer of Merlin Entertainments has said. The term revenge spending was coined to describe how people looked to splash the cash they had saved up during the Covid pandemic on products or experiences that would help make up for time lost to lockdowns. Continue reading... Go to Source 16/06/2024 - 15:00 /Jack Simpson Twitter: @hoffeldtcom
Read More
Covid-19

Anthony Fauci says he turned down pharma jobs while he was Covid chief

Coronavirus | The Guardian Former infectious disease head says big pharma tried to poach him while he was combating coronavirus. Before retiring from his lengthy run as the US government’s top infectious disease doctor, major pharmaceutical companies tried to lure Anthony Fauci away from his post by offering him seven-figure jobs – but he turned them down because he “cared about … the health of the country” too much, he says in a new interview. Fauci’s comments on his loyalty to the National Institute of Allergy and Infectious Diseases (NIAID) – which he directed for 38 years before retiring in December 2022 – come only a couple of weeks after he testified to Congress about receiving “credible death threats” from far-right extremists over his efforts to slow the spread of Covid-19 at the beginning of the pandemic. Continue reading... Go to Source 15/06/2024 - 18:08 /Ramon Antonio Vargas Twitter: @hoffeldtcom
Read More
Covid-19

COVID-19, Ebola, bird flu: What to know about zoonotic diseases

COVID-19 and H5N1 bird flu are both zoonotic, meaning they jumped from animals to humans. How did that happen and how can they infect humans? Go to Source 15/06/2024 - 15:14 /Nathaniel Dove Twitter: @hoffeldtcom
Read More
Artificial Intelligence

Technique improves the reasoning capabilities of large language models

MIT News - Artificial intelligence Large language models like those that power ChatGPT have shown impressive performance on tasks like drafting legal briefs, analyzing the sentiment of customer reviews, or translating documents into different languages. These machine-learning models typically use only natural language to process information and answer queries, which can make it difficult for them to perform tasks that require numerical or symbolic reasoning. For instance, a large language model might be able to memorize and recite a list of recent U.S. presidents and their birthdays, but that same model could fail if asked the question “Which U.S. presidents elected after 1950 were born on a Wednesday?” (The answer is Jimmy Carter.) Researchers from MIT and elsewhere have proposed a new technique that enables large language models to solve natural language, math and data analysis, and symbolic reasoning tasks by generating programs. Their approach, called natural language embedded programs (NLEPs), involves prompting a language model to create and execute a Python program to solve a user’s query, and then output the solution as natural language. They found that NLEPs enabled large language models to achieve higher accuracy on a wide range of reasoning tasks. The approach is also generalizable, which means one NLEP prompt can be reused for multiple tasks. NLEPs also improve transparency, since a user could check the program to see exactly how the model reasoned about the query and fix the program if the model gave a wrong answer. “We want AI to perform complex reasoning in a way that is transparent and trustworthy. There is still a long way to go, but we have shown that combining the capabilities of programming and natural language in large language models is a very good potential first step toward a future where people can fully understand and trust what is going on inside their AI...
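The excerpt does not include the authors' prompts, but the basic NLEP pattern (have the model answer by writing a runnable Python program, execute it, and return what it prints) can be sketched like this. The prompt template and the Bedrock/Claude backend are assumptions, and running exec() on model output is for illustration only.

```python
# Toy NLEP-style loop: ask an LLM to answer a question by writing a Python program,
# run that program, and hand back whatever it prints. The prompt is an assumption, and
# exec() on model output is shown only for illustration; a real system would sandbox it.
import contextlib
import io
import json

import boto3

bedrock = boto3.client("bedrock-runtime")

def solve_with_program(question: str) -> str:
    prompt = (
        "Solve the question below by writing a complete Python program that prints the "
        "final answer as a short sentence. Return only code.\n\nQuestion: " + question
    )
    body = {
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": 1024,
        "messages": [{"role": "user", "content": [{"type": "text", "text": prompt}]}],
    }
    response = bedrock.invoke_model(
        modelId="anthropic.claude-3-sonnet-20240229-v1:0",
        body=json.dumps(body),
    )
    program = json.loads(response["body"].read())["content"][0]["text"]
    program = program.replace("```python", "").replace("```", "")  # strip any markdown fences

    captured = io.StringIO()
    with contextlib.redirect_stdout(captured):
        exec(program, {})  # the program itself is inspectable, which is the point of NLEPs
    return captured.getvalue().strip()

print(solve_with_program("Which U.S. presidents elected after 1950 were born on a Wednesday?"))
```

Because the generated program is plain Python, a user can read it to audit the model's reasoning and correct it if the answer is wrong, which is the transparency benefit described above.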
Read More
Artificial Intelligence

A creation story told through immersive technology

MIT News - Artificial intelligence In the beginning, as one version of the Haudenosaunee creation story has it, there was only water and sky. According to oral tradition, when the Sky Woman became pregnant, she dropped through a hole in the clouds. While many animals guided her descent as she fell, she eventually found a place on the turtle’s back. They worked together, with the aid of other water creatures, to lift the land from the depths of these primordial waters to create what we now know as our earth. The new immersive experience, “Ne:Kahwistará:ken Kanónhsa’kówa í:se Onkwehonwe,” is a vivid retelling of this creation story by multimedia artist Jackson 2bears, also known as Tékeniyáhsen Ohkwá:ri (Kanien’kehà:ka), the 2022–24 Ida Ely Rubin Artist in Residence at the MIT Center for Art, Science and Technology. “A lot of what drives my work is finding new ways to keep Haudenosaunee teachings and stories alive in our communities, finding new ways to tell them, but also helping with the transmission and transformation of those stories as they are for us, a living part of our cultural practice,” he says. A virtual recreation of the traditional longhouse: 2bears was first inspired to create a virtual reality version of a longhouse, a traditional Haudenosaunee structure, in collaboration with Thru the RedDoor, an Indigenous-owned media company in Six Nations at the Grand River that 2bears calls home. The longhouse is not only a “functional dwelling,” says 2bears, but an important spiritual and cultural center where creation myths are shared. “While we were developing the project, we were told by one of our knowledge keepers in the community that longhouses aren’t structures, they’re not the materials they’re made out of,” 2bears recalls, “They’re about the people, the Haudenosaunee people. And it’s about our creative cultural practices in that space that make it a sacred place.” The virtual...
Read More
Business News

Starmer banks on economic growth to ‘rebuild Britain’

BBC News Sir Keir Starmer says wealth creation is the top priority of his party's blueprint for government, as he unveils the Labour manifesto. Go to Source 14/06/2024 - 00:59 / Twitter: @hoffeldtcom
Read More
Covid-19

Immunisation rates fall among Australia’s vulnerable as experts blame pandemic misinformation and practical barriers

Coronavirus | The Guardian Below-target levels come after record highs in 2020, with some areas in NSW, Queensland and WA now showing consistently lower vaccination rates. Immunisation rates are lagging in Australia’s most vulnerable populations – the very young and old – with experts blaming practical barriers as well as the misinformation and vaccine hesitancy that took off during the Covid-19 pandemic. In 2020 Australia achieved a record high rate of 95.09% of five-year-olds fully immunised against infectious diseases, even surpassing the government’s target of 95%, which provides “herd immunity”. Continue reading... Go to Source 13/06/2024 - 18:59 /Natasha May Twitter: @hoffeldtcom
Read More
Business News

Shell is front runner for LNG assets of Temasek-owned Pavilion Energy

The Straits Times Business News Temasek was said to be seeking more than US$2 billion (S$2.69 billion) for the business. Go to Source 13/06/2024 - 03:18 / Twitter: @hoffeldtcom
Read More
Management

Former Express Scripts exec can’t take CVS job, appeals court rules

Human Resources News - Human Resources News Headlines | Bizjournals.com A former president of Express Scripts, the St. Louis-based pharmacy benefits management arm of a Cigna subsidiary, is still barred from taking a job with CVS Health that she was named to over a year ago, according to an appeals court ruling. Go to Source 13/06/2024 - 00:03 /Diana Barr Twitter: @hoffeldtcom
Read More
Artificial Intelligence

Symposium highlights scale of mental health crisis and novel methods of diagnosis and treatment

MIT News - Artificial intelligence Digital technologies, such as smartphones and machine learning, have revolutionized education. At the McGovern Institute for Brain Research’s 2024 Spring Symposium, “Transformational Strategies in Mental Health,” experts from across the sciences — including psychiatry, psychology, neuroscience, computer science, and others — agreed that these technologies could also play a significant role in advancing the diagnosis and treatment of mental health disorders and neurological conditions. Co-hosted by the McGovern Institute, MIT Open Learning, McLean Hospital, the Poitras Center for Psychiatric Disorders Research at MIT, and the Wellcome Trust, the symposium raised the alarm about the rise in mental health challenges and showcased the potential for novel diagnostic and treatment methods. John Gabrieli, the Grover Hermann Professor of Health Sciences and Technology at MIT, kicked off the symposium with a call for an effort on par with the Manhattan Project, which in the 1940s saw leading scientists collaborate to do what seemed impossible. While the challenge of mental health is quite different, Gabrieli stressed, the complexity and urgency of the issue are similar. In his later talk, “How can science serve psychiatry to enhance mental health?,” he noted a 35 percent rise in teen suicide deaths between 1999 and 2000 and, between 2007 and 2015, a 100 percent increase in emergency room visits for youths ages 5 to 18 who experienced a suicide attempt or suicidal ideation. “We have no moral ambiguity, but all of us speaking today are having this meeting in part because we feel this urgency,” said Gabrieli, who is also a professor of brain and cognitive sciences, the director of the Integrated Learning Initiative (MITili) at MIT Open Learning, and a member of the McGovern Institute. “We have to do something together as a community of scientists and partners of all kinds to make a difference.” An...
Read More
Artificial Intelligence

Build a custom UI for Amazon Q Business

AWS Machine Learning Blog Amazon Q is a new generative artificial intelligence (AI)-powered assistant designed for work that can be tailored to your business. Amazon Q can help you get fast, relevant answers to pressing questions, solve problems, generate content, and take actions using the data and expertise found in your company’s information repositories and enterprise systems. When you chat with Amazon Q, it provides immediate, relevant information and advice to help streamline tasks, speed up decision-making, and spark creativity and innovation at work. For more information, see Amazon Q Business, now generally available, helps boost workforce productivity with generative AI. This post demonstrates how to build a custom UI for Amazon Q Business. The customized UI allows you to implement special features like handling feedback, using company brand colors and templates, and using a custom login. It also enables conversing with Amazon Q through an interface personalized to your use case. Solution overview In this solution, we deploy a custom web experience for Amazon Q to deliver quick, accurate, and relevant answers to your business questions on top of an enterprise knowledge base. The following diagram illustrates the solution architecture. The workflow includes the following steps: The user accesses the chatbot application, which is hosted behind an Application Load Balancer. After the user logs in, they’re redirected to the Amazon Cognito login page for authentication. This solution uses an Amazon Cognito user pool as an OAuth-compatible identity provider (IdP), which is required in order to exchange a token with AWS IAM Identity Center and later on interact with the Amazon Q Business APIs. For more information about trusted token issuers and how token exchanges are performed, see Using applications with a trusted token issuer. If you already have an OAuth-compatible IdP, you can use it instead of setting an...
Read More
Artificial Intelligence

Scalable intelligent document processing using Amazon Bedrock

AWS Machine Learning Blog In today’s data-driven business landscape, the ability to efficiently extract and process information from a wide range of documents is crucial for informed decision-making and maintaining a competitive edge. However, traditional document processing workflows often involve complex and time-consuming manual tasks, hindering productivity and scalability. In this post, we discuss an approach that uses the Anthropic Claude 3 Haiku model on Amazon Bedrock to enhance document processing capabilities. Amazon Bedrock is a fully managed service that makes foundation models (FMs) from leading artificial intelligence (AI) startups and Amazon available through an API, so you can choose from a wide range of FMs to find the model that is best suited for your use case. With the Amazon Bedrock serverless experience, you can get started quickly, privately customize FMs with your own data, and integrate and deploy them into your applications using the AWS tools without having to manage any infrastructure. At the heart of this solution lies the Anthropic Claude 3 Haiku model, the fastest and most affordable model in its intelligence class. With state-of-the-art vision capabilities and strong performance on industry benchmarks, Anthropic Claude 3 Haiku is a versatile solution for a wide range of enterprise applications. By using the advanced natural language processing (NLP) capabilities of Anthropic Claude 3 Haiku, our intelligent document processing (IDP) solution can extract valuable data directly from images, eliminating the need for complex postprocessing. Scalable and efficient data extraction Our solution overcomes the traditional limitations of document processing by addressing the following key challenges: Simple prompt-based extraction – This solution allows you to define the specific data you need to extract from the documents through intuitive prompts. The Anthropic Claude 3 Haiku model then processes the documents and returns the desired information, streamlining the entire workflow. Handling larger file...
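The post's full solution is beyond this excerpt, but the core prompt-based extraction step (sending a document image plus an instruction to Claude 3 Haiku through Amazon Bedrock) looks roughly like the sketch below. The field list, prompt wording, and file name are assumptions about a typical IDP setup.

```python
# Illustrative prompt-based extraction from a scanned document image with Claude 3 Haiku
# on Amazon Bedrock. The field list, prompt, and file name are assumptions.
import base64
import json
import boto3

bedrock = boto3.client("bedrock-runtime")

def extract_fields(image_path: str) -> str:
    with open(image_path, "rb") as f:
        image_b64 = base64.b64encode(f.read()).decode("utf-8")
    body = {
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": 1024,
        "messages": [{
            "role": "user",
            "content": [
                {"type": "image",
                 "source": {"type": "base64", "media_type": "image/png", "data": image_b64}},
                {"type": "text",
                 "text": "Extract vendor_name, invoice_date and total_amount from this "
                         "document as JSON. Use null for anything you cannot read."},
            ],
        }],
    }
    response = bedrock.invoke_model(
        modelId="anthropic.claude-3-haiku-20240307-v1:0",
        body=json.dumps(body),
    )
    return json.loads(response["body"].read())["content"][0]["text"]

print(extract_fields("invoice_page_1.png"))
```

Because the model reads the image directly, there is no separate OCR and template-matching stage; changing the prompt is enough to extract a different set of fields.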
Read More
Artificial Intelligence

Use weather data to improve forecasts with Amazon SageMaker Canvas

AWS Machine Learning Blog Time series forecasting is a specific machine learning (ML) discipline that enables organizations to make informed planning decisions. The main idea is to supply historic data to an ML algorithm that can identify patterns from the past and then use those patterns to estimate likely values about unseen periods in the future. Amazon has a long heritage of using time series forecasting, dating back to the early days of having to meet mail-order book demand. Fast forward more than a quarter century and advanced forecasting using modern ML algorithms is offered to customers through Amazon SageMaker Canvas, a no-code workspace for all phases of ML. SageMaker Canvas enables you to prepare data using natural language, build and train highly accurate models, generate predictions, and deploy models to production—all without writing a single line of code. In this post, we describe how to use weather data to build and implement a forecasting cycle that you can use to elevate your business’ planning capabilities. Business use cases for time series forecasting Today, companies of every size and industry who invest in forecasting capabilities can improve outcomes—whether measured financially or in customer satisfaction—compared to using intuition-based estimation. Regardless of industry, every customer desires highly accurate models that can maximize their outcome. Here, accuracy means that future estimates produced by the ML model end up being as close as possible to the actual future. If the ML model estimates either too high or too low, it can reduce the effectiveness the business was hoping to achieve. To maximize accuracy, ML models benefit from rich, quality data that reflects demand patterns, including cycles of highs and lows, and periods of stability. The shape of these historic patterns may be driven by several factors. Examples include...
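SageMaker Canvas itself is no-code, so the only code involved is optional data preparation. As a small, hypothetical example, the snippet below joins a historical demand series with daily weather covariates so the combined CSV can be imported into Canvas as a forecasting dataset; the file and column names are placeholders.

```python
# Data-prep sketch only: join a historical demand series with daily weather so the
# combined CSV can be uploaded to SageMaker Canvas as a forecasting dataset.
# File names and column names are placeholders.
import pandas as pd

demand = pd.read_csv("daily_demand.csv", parse_dates=["date"])      # item_id, date, demand
weather = pd.read_csv("daily_weather.csv", parse_dates=["date"])    # date, temp_c, precip_mm

dataset = demand.merge(weather, on="date", how="left")

# Forward-fill short gaps in the weather covariates so the target series stays continuous.
dataset[["temp_c", "precip_mm"]] = dataset[["temp_c", "precip_mm"]].ffill()

dataset.to_csv("demand_with_weather.csv", index=False)  # ready to import into SageMaker Canvas
```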
Read More
Artificial Intelligence

Researchers use large language models to help robots navigate

MIT News - Artificial intelligence Someday, you may want your home robot to carry a load of dirty clothes downstairs and deposit them in the washing machine in the far-left corner of the basement. The robot will need to combine your instructions with its visual observations to determine the steps it should take to complete this task. For an AI agent, this is easier said than done. Current approaches often utilize multiple hand-crafted machine-learning models to tackle different parts of the task, which require a great deal of human effort and expertise to build. These methods, which use visual representations to directly make navigation decisions, demand massive amounts of visual data for training, which are often hard to come by. To overcome these challenges, researchers from MIT and the MIT-IBM Watson AI Lab devised a navigation method that converts visual representations into pieces of language, which are then fed into one large language model that achieves all parts of the multistep navigation task. Rather than encoding visual features from images of a robot’s surroundings as visual representations, which is computationally intensive, their method creates text captions that describe the robot’s point-of-view. A large language model uses the captions to predict the actions a robot should take to fulfill a user’s language-based instructions. Because their method utilizes purely language-based representations, they can use a large language model to efficiently generate a huge amount of synthetic training data. While this approach does not outperform techniques that use visual features, it performs well in situations that lack enough visual data for training. The researchers found that combining their language-based inputs with visual signals leads to better navigation performance. “By purely using language as the perceptual representation, ours is a more straightforward approach. Since all the inputs can be encoded as language, we can generate a human-understandable trajectory,” says Bowen...
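The researchers' models and prompts are not included in this excerpt; the toy sketch below only illustrates the general idea of feeding a text caption of the robot's current view, together with the user's instruction, to an LLM that picks the next action. The action set, prompt format, and Bedrock/Claude backend are assumptions.

```python
# Toy version of caption-based navigation: describe the robot's current view as text,
# then ask an LLM to pick the next action toward a language instruction. The action
# set, caption source, and prompt format are assumptions for illustration.
import json
import boto3

bedrock = boto3.client("bedrock-runtime")
ACTIONS = ["move forward", "turn left", "turn right", "go downstairs", "stop"]

def next_action(instruction: str, scene_caption: str) -> str:
    prompt = (
        f"Instruction: {instruction}\n"
        f"Current view (caption): {scene_caption}\n"
        f"Choose exactly one next action from {ACTIONS} and reply with that action only."
    )
    body = {
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": 16,
        "messages": [{"role": "user", "content": [{"type": "text", "text": prompt}]}],
    }
    response = bedrock.invoke_model(
        modelId="anthropic.claude-3-haiku-20240307-v1:0",
        body=json.dumps(body),
    )
    return json.loads(response["body"].read())["content"][0]["text"].strip()

print(next_action("Take the laundry to the washing machine in the basement",
                  "A hallway with an open door on the left and a staircase ahead"))
```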
Read More
Artificial Intelligence

Making climate models relevant for local decision-makers

MIT News - Artificial intelligence Climate models are a key technology in predicting the impacts of climate change. By running simulations of the Earth’s climate, scientists and policymakers can estimate conditions like sea level rise, flooding, and rising temperatures, and make decisions about how to appropriately respond. But current climate models struggle to provide this information quickly or affordably enough to be useful on smaller scales, such as the size of a city. Now, authors of a new open-access paper published in the Journal of Advances in Modeling Earth Systems have found a method to leverage machine learning to utilize the benefits of current climate models, while reducing the computational costs needed to run them. “It turns the traditional wisdom on its head,” says Sai Ravela, a principal research scientist in MIT’s Department of Earth, Atmospheric and Planetary Sciences (EAPS) who wrote the paper with EAPS postdoc Anamitra Saha. Traditional wisdom: In climate modeling, downscaling is the process of using a global climate model with coarse resolution to generate finer details over smaller regions. Imagine a digital picture: A global model is a large picture of the world with a low number of pixels. To downscale, you zoom in on just the section of the photo you want to look at — for example, Boston. But because the original picture was low resolution, the new version is blurry; it doesn’t give enough detail to be particularly useful. “If you go from coarse resolution to fine resolution, you have to add information somehow,” explains Saha. Downscaling attempts to add that information back in by filling in the missing pixels. “That addition of information can happen two ways: Either it can come from theory, or it can come from data.” Conventional downscaling often involves using models built on physics (such as the process of air rising, cooling, and condensing, or the landscape of the area), and supplementing it with statistical data taken from historical observations. But this method is computationally taxing: It takes a lot of time and computing...
Read More
Artificial Intelligence

New algorithm discovers language just by watching videos

MIT News - Artificial intelligence Mark Hamilton, an MIT PhD student in electrical engineering and computer science and affiliate of MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL), wants to use machines to understand how animals communicate. To do that, he set out first to create a system that can learn human language “from scratch.” “Funny enough, the key moment of inspiration came from the movie ‘March of the Penguins.’ There’s a scene where a penguin falls while crossing the ice, and lets out a little belabored groan while getting up. When you watch it, it’s almost obvious that this groan is standing in for a four letter word. This was the moment where we thought, maybe we need to use audio and video to learn language,” says Hamilton. “Is there a way we could let an algorithm watch TV all day and from this figure out what we're talking about?” “Our model, ‘DenseAV,’ aims to learn language by predicting what it’s seeing from what it’s hearing, and vice-versa. For example, if you hear the sound of someone saying ‘bake the cake at 350’ chances are you might be seeing a cake or an oven. To succeed at this audio-video matching game across millions of videos, the model has to learn what people are talking about,” says Hamilton. Once they trained DenseAV on this matching game, Hamilton and his colleagues looked at which pixels the model looked for when it heard a sound. For example, when someone says “dog,” the algorithm immediately starts looking for dogs in the video stream. By seeing which pixels are selected by the algorithm, one can discover what the algorithm thinks a word means. Interestingly, a similar search process happens when DenseAV listens to a dog barking: It searches for a dog in the video stream. “This piqued our interest....
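DenseAV's architecture and training objective are not given in this excerpt. The snippet below shows only a generic audio-video matching objective of the kind described (paired clips should score higher than mismatched ones within a batch), written as a standard symmetric InfoNCE loss in PyTorch; the embedding size and temperature are arbitrary.

```python
# Generic audio-video "matching game" objective: paired audio/video embeddings should be
# more similar than mismatched pairs within a batch. This is a standard symmetric
# InfoNCE loss, not DenseAV's exact formulation.
import torch
import torch.nn.functional as F

def audio_video_contrastive_loss(audio_emb: torch.Tensor,
                                 video_emb: torch.Tensor,
                                 temperature: float = 0.07) -> torch.Tensor:
    # audio_emb, video_emb: (batch, dim), one row per clip, aligned by index.
    a = F.normalize(audio_emb, dim=-1)
    v = F.normalize(video_emb, dim=-1)
    logits = a @ v.t() / temperature                       # (batch, batch) similarity matrix
    targets = torch.arange(a.size(0), device=a.device)     # matching pairs sit on the diagonal
    return 0.5 * (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets))

# Example with random embeddings standing in for real audio and video encoders.
loss = audio_video_contrastive_loss(torch.randn(8, 256), torch.randn(8, 256))
print(loss.item())
```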
Read More
Artificial Intelligence

Reimagining software development with the Amazon Q Developer Agent

AWS Machine Learning Blog Amazon Q Developer is an AI-powered assistant for software development that reimagines the experience across the entire software development lifecycle, making it faster to build, secure, manage, and optimize applications on or off of AWS. The Amazon Q Developer Agent includes an agent for feature development that automatically implements multi-file features, bug fixes, and unit tests in your integrated development environment (IDE) workspace using natural language input. After you enter your query, the software development agent analyzes your code base and formulates a plan to fulfill the request. You can accept the plan or ask the agent to iterate on it. After the plan is validated, the agent generates the code changes needed to implement the feature you requested. You can then review and accept the code changes or request a revision. Amazon Q Developer uses generative artificial intelligence (AI) to deliver state-of-the-art accuracy for all developers, taking first place on the leaderboard for SWE-bench, a dataset that tests a system’s ability to automatically resolve GitHub issues. This post describes how to get started with the software development agent, gives an overview of how the agent works, and discusses its performance on public benchmarks. We also delve into the process of getting started with the Amazon Q Developer Agent and give an overview of the underlying mechanisms that make it a state-of-the-art feature development agent. Getting started To get started, you need to have an AWS Builder ID or be part of an organization with an AWS IAM Identity Center instance set up that allows you to use Amazon Q. To use Amazon Q Developer Agent for feature development in Visual Studio Code, start by installing the Amazon Q extension. The extension is also available for JetBrains, Visual Studio (in preview), and in the Command Line...
Read More
Talent Management

Psychology in construction: A psychologist shares her insights

Everyone's Blog Posts - RecruitingBlogs Celebrity psychologist and international speaker Charissa Bloomberg has a history of applying her skills in the engineering, mining, and construction industries. Here, she shares her approach, from initial needs analysis to the human element that should never be underestimated. Bloomberg, known for her guest appearances across radio and TV stations, has a passion for integrity and mental health awareness, which she has applied for over a decade in the engineering, mining, and construction industries. Fondly known as the “site shrink”, Bloomberg believes that companies in this niche often forget that it’s people who build our projects and infrastructure. As a site psychologist, she works in close collaboration with the managing director and his team. At times, she is also called upon to advise at EXCO level. “The best time to be roped in is at the start of a project. Later on, it can be tricky to iron out problems when an incompatible team is facing issues due to different management styles. Key aspects to remember include motivation, morale, personality traits, poor leadership, low EQ, integrity, corruption, communication issues, and the importance of adhering to health and safety protocols,” she enthuses. Bloomberg is not afraid to get dirty on site or to counter a foreman who questions why they have been booked for a two-hour strengthening session when they are on a tight schedule with a billion-dollar project. “I just get on with the training, only to find that the engineers and other industry specialists enjoy the sessions and go back to work refreshed; they also claim they are able to take the knowledge with them for application both in their work environment and home lives.” On-site training should be fun, she says, explaining that incorporating role playing, sharing opinions and stories, brainstorming, and even...
Read More
Artificial Intelligence

Get started quickly with AWS Trainium and AWS Inferentia using AWS Neuron DLAMI and AWS Neuron DLC

AWS Machine Learning Blog Starting with the AWS Neuron 2.18 release, you can now launch Neuron DLAMIs (AWS Deep Learning AMIs) and Neuron DLCs (AWS Deep Learning Containers) with the latest released Neuron packages on the same day as the Neuron SDK release. When a Neuron SDK is released, you’ll now be notified of the support for Neuron DLAMIs and Neuron DLCs in the Neuron SDK release notes, with a link to the AWS documentation containing the DLAMI and DLC release notes. In addition, this release introduces a number of features that help improve user experience for Neuron DLAMIs and DLCs. In this post, we walk through some of the support highlights with Neuron 2.18. Neuron DLC and DLAMI overview and announcements The DLAMI is a pre-configured AMI that comes with popular deep learning frameworks like TensorFlow, PyTorch, Apache MXNet, and others pre-installed. This allows machine learning (ML) practitioners to rapidly launch an Amazon Elastic Compute Cloud (Amazon EC2) instance with a ready-to-use deep learning environment, without having to spend time manually installing and configuring the required packages. The DLAMI supports various instance types, including Neuron Trainium and Inferentia powered instances, for accelerated training and inference. AWS DLCs provide a set of Docker images that are pre-installed with deep learning frameworks. The containers are optimized for performance and available in Amazon Elastic Container Registry (Amazon ECR). DLCs make it straightforward to deploy custom ML environments in a containerized manner, while taking advantage of the portability and reproducibility benefits of containers. Multi-Framework DLAMIs The Neuron Multi-Framework DLAMI for Ubuntu 22 provides separate virtual environments for multiple ML frameworks: PyTorch 2.1, PyTorch 1.13, Transformers NeuronX, and TensorFlow 2.10. DLAMI offers you the convenience of having all these popular frameworks readily available in a single AMI, simplifying their setup and reducing the need...
Read More
Artificial Intelligence

Sprinklr improves performance by 20% and reduces cost by 25% for machine learning inference on AWS Graviton3

AWS Machine Learning Blog This is a guest post co-written with Ratnesh Jamidar and Vinayak Trivedi from Sprinklr. Sprinklr’s mission is to unify silos, technology, and teams across large, complex companies. To achieve this, we provide four product suites: Sprinklr Service, Sprinklr Insights, Sprinklr Marketing, and Sprinklr Social, as well as several self-serve offerings. Each of these products is infused with artificial intelligence (AI) capabilities to deliver exceptional customer experience. Sprinklr’s specialized AI models streamline data processing, gather valuable insights, and enable workflows and analytics at scale to drive better decision-making and productivity. In this post, we describe the scale of our AI offerings, the challenges with diverse AI workloads, and how we optimized mixed AI workload inference performance with AWS Graviton3 based c7g instances and achieved 20% throughput improvement, 30% latency reduction, and reduced our cost by 25–30%. Sprinklr’s AI scale and challenges with diverse AI workloads Our purpose-built AI processes unstructured customer experience data from millions of sources, providing actionable insights and improving productivity for customer-facing teams to deliver exceptional experiences at scale. To understand our scaling and cost challenges, let’s look at some representative numbers. Sprinklr’s platform uses thousands of servers that fine-tune and serve over 750 pre-built AI models across over 60 verticals, and run more than 10 billion predictions per day. To deliver a tailored user experience across these verticals, we deploy patented AI models fine-tuned for specific business applications and use nine layers of machine learning (ML) to extract meaning from data across formats: automatic speech recognition, natural language processing, computer vision, network graph analysis, anomaly detection, trends, predictive analysis, natural language generation, and similarity engine. The diverse and rich database of models brings unique challenges for choosing the most efficient deployment infrastructure that gives the best latency and performance. For example, for mixed...
Read More
Artificial Intelligence

New computer vision method helps speed up screening of electronic materials

MIT News - Artificial intelligence Boosting the performance of solar cells, transistors, LEDs, and batteries will require better electronic materials, made from novel compositions that have yet to be discovered. To speed up the search for advanced functional materials, scientists are using AI tools to identify promising materials from hundreds of millions of chemical formulations. In tandem, engineers are building machines that can print hundreds of material samples at a time based on chemical compositions tagged by AI search algorithms. But to date, there’s been no similarly speedy way to confirm that these printed materials actually perform as expected. This last step of material characterization has been a major bottleneck in the pipeline of advanced materials screening. Now, a new computer vision technique developed by MIT engineers significantly speeds up the characterization of newly synthesized electronic materials. The technique automatically analyzes images of printed semiconducting samples and quickly estimates two key electronic properties for each sample: band gap (a measure of electron activation energy) and stability (a measure of longevity). The new technique accurately characterizes electronic materials 85 times faster compared to the standard benchmark approach. The researchers intend to use the technique to speed up the search for promising solar cell materials. They also plan to incorporate the technique into a fully automated materials screening system. “Ultimately, we envision fitting this technique into an autonomous lab of the future,” says MIT graduate student Eunice Aissi. “The whole system would allow us to give a computer a materials problem, have it predict potential compounds, and then run 24-7 making and characterizing those predicted materials until it arrives at the desired solution.” “The application space for these techniques ranges from improving solar energy to transparent electronics and transistors,” adds MIT graduate student Alexander (Aleks) Siemenn. “It really spans the full gamut of where semiconductor materials can benefit society.” Aissi...
Read More
Artificial Intelligence

Code generation using Code Llama 70B and Mixtral 8x7B on Amazon SageMaker

AWS Machine Learning Blog In the ever-evolving landscape of machine learning and artificial intelligence (AI), large language models (LLMs) have emerged as powerful tools for a wide range of natural language processing (NLP) tasks, including code generation. Among these cutting-edge models, Code Llama 70B stands out as a true heavyweight, boasting an impressive 70 billion parameters. Developed by Meta and now available on Amazon SageMaker, this state-of-the-art LLM promises to revolutionize the way developers and data scientists approach coding tasks. What is Code Llama 70B and Mixtral 8x7B? Code Llama 70B is a variant of the Code Llama foundation model (FM), a fine-tuned version of Meta’s renowned Llama 2 model. This massive language model is specifically designed for code generation and understanding, capable of generating code from natural language prompts or existing code snippets. With its 70 billion parameters, Code Llama 70B offers unparalleled performance and versatility, making it a game-changer in the world of AI-assisted coding. Mixtral 8x7B is a state-of-the-art sparse mixture of experts (MoE) foundation model released by Mistral AI. It supports multiple use cases such as text summarization, classification, text generation, and code generation. It is an 8x model, which means it contains eight distinct groups of parameters. The model has about 45 billion total parameters and supports a context length of 32,000 tokens. MoE is a type of neural network architecture that consists of multiple “experts,” where each expert is a neural network. In the context of transformer models, MoE replaces some feed-forward layers with sparse MoE layers. These layers have a certain number of experts, and a router network selects which experts process each token at each layer. MoE models enable more compute-efficient and faster inference compared to dense models. Key features and capabilities of Code Llama 70B and Mixtral 8x7B include: Code generation:...
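The deployment steps are beyond this excerpt, but assuming a Code Llama 70B (or Mixtral 8x7B) endpoint has already been created, for example through SageMaker JumpStart, invoking it for code generation looks roughly like this. The endpoint name and the payload shape with "inputs" and "parameters" follow the common JumpStart/Hugging Face text-generation convention and are assumptions here.

```python
# Invoking an already-deployed code-generation endpoint on SageMaker. The endpoint name
# and the {"inputs": ..., "parameters": ...} payload shape follow the common
# JumpStart/Hugging Face text-generation convention and are assumptions here.
import json
import boto3

runtime = boto3.client("sagemaker-runtime")

payload = {
    "inputs": "# Write a Python function that checks whether a string is a palindrome\n",
    "parameters": {"max_new_tokens": 256, "temperature": 0.2},
}

response = runtime.invoke_endpoint(
    EndpointName="code-llama-70b-endpoint",   # placeholder endpoint name
    ContentType="application/json",
    Body=json.dumps(payload),
)
print(json.loads(response["Body"].read()))
```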
Read More
Covid-19

COVID-flu shot offers strong immune response in late-stage trial, Moderna says

Moderna says its combination vaccine to protect against both COVID-19 and influenza generated a stronger immune response in adults 50 and over when compared to separate shots. Go to Source 10/06/2024 - 15:33 / Twitter: @hoffeldtcom
Read More
Covid-19

Moderna combi flu and Covid jab gives better protection, study finds

Coronavirus | The Guardian Clinical trials show Spikevax may bring about higher immune responses than separate inoculations A combined flu and coronavirus vaccine brings about a higher immune response to both diseases than when the vaccines are administered separately, a clinical trial has shown. Moderna, the biotech firm behind the Spikevax vaccine used in NHS booster programmes, is trialling a two-in-one jab that can also protect against flu. Initial results suggest it may protect against both diseases better than the vaccines currently in use. Continue reading... Go to Source 10/06/2024 - 15:21 /Tobi Thomas Health and Inequalities Correspondent Twitter: @hoffeldtcom
Read More
Business News

Gold is getting harder to find as miners struggle to excavate more, World Gold Council says

US Top News and Analysis The gold mining industry is struggling to sustain production growth as deposits of the yellow metal become harder to find, said the World Gold Council. Go to Source 10/06/2024 - 03:31 / Twitter: @hoffeldtcom
Read More
Covid-19

Retiring head of Barrie food bank reflects on challenges of pandemic, jump in demand

After seeing the agency through a global pandemic and an unprecedented jump in demand, the head of Barrie’s Food Bank is retiring. Go to Source 08/06/2024 - 09:13 /Sawyer Bogdan Twitter: @hoffeldtcom
Read More
Business News

Here’s where the jobs are for May 2024 — in one chart

US Top News and Analysis Job growth in May came out surprisingly strong, pushing back on lingering fears of a broader economic slowdown. Go to Source 07/06/2024 - 15:38 / Twitter: @hoffeldtcom
Read More
Psychology

Webinar: NIH’s Definition of a Clinical Trial

NIMH News Feed Experts from the National Institute of Mental Health (NIMH) will provide an overview of NIH clinical trial classifications, with a particular focus on global mental health research. Go to Source 07/06/2024 - 06:13 /National Institute of Mental Health Twitter: @hoffeldtcom
Read More
Business News

How to do business better by reading ‘non-business’ books

The Straits Times Business News The trick is to know that the best business books tend not to be written for reasons of business. Go to Source 07/06/2024 - 00:24 / Twitter: @hoffeldtcom
Read More
Artificial Intelligence

A data-driven approach to making better choices

MIT News - Artificial intelligence Imagine a world in which some important decision — a judge’s sentencing recommendation, a child’s treatment protocol, which person or business should receive a loan — was made more reliable because a well-designed algorithm helped a key decision-maker arrive at a better choice. A new MIT economics course is investigating these interesting possibilities.Class 14.163 (Algorithms and Behavioral Science) is a new cross-disciplinary course focused on behavioral economics, which studies the cognitive capacities and limitations of human beings. The course was co-taught this past spring by assistant professor of economics Ashesh Rambachan and visiting lecturer Sendhil Mullainathan.Rambachan studies the economic applications of machine learning, focusing on algorithmic tools that drive decision-making in the criminal justice system and consumer lending markets. He also develops methods for determining causation using cross-sectional and dynamic data.Mullainathan will soon join the MIT departments of Electrical Engineering and Computer Science and Economics as a professor. His research uses machine learning to understand complex problems in human behavior, social policy, and medicine. Mullainathan co-founded the Abdul Latif Jameel Poverty Action Lab (J-PAL) in 2003.The new course’s goals are both scientific (to understand people) and policy-driven (to improve society by improving decisions). Rambachan believes that machine-learning algorithms provide new tools for both the scientific and applied goals of behavioral economics.“The course investigates the deployment of computer science, artificial intelligence (AI), economics, and machine learning in service of improved outcomes and reduced instances of bias in decision-making,” Rambachan says.There are opportunities, Rambachan believes, for constantly evolving digital tools like AI, machine learning, and large language models (LLMs) to help reshape everything from discriminatory practices in criminal sentencing to health-care outcomes among underserved populations.Students learn how to use machine learning tools with three main objectives: to understand what they do and how they do it, to formalize behavioral economics insights...
Read More
Covid-19

Less grooming and more chores: How life changes when you work from home

Working from home spiked during the pandemic, and changed the way many people work and live. New data from Statistics Canada sheds light on its impacts. Go to Source 06/06/2024 - 19:03 /Uday Rana Twitter: @hoffeldtcom
Read More
Artificial Intelligence

Build RAG applications using Jina Embeddings v2 on Amazon SageMaker JumpStart

AWS Machine Learning Blog Today, we are excited to announce that the Jina Embeddings v2 model, developed by Jina AI, is available for customers through Amazon SageMaker JumpStart to deploy with one click for running model inference. This state-of-the-art model supports an impressive 8,192-token context length. You can deploy this model with SageMaker JumpStart, a machine learning (ML) hub with foundation models, built-in algorithms, and pre-built ML solutions that you can deploy with just a few clicks. Text embedding refers to the process of transforming text into numerical representations that reside in a high-dimensional vector space. Text embeddings have a broad range of applications in enterprise artificial intelligence (AI), including the following: Multimodal search for ecommerce Content personalization Recommender systems Data analytics Jina Embeddings v2 is a state-of-the-art collection of text embedding models, trained by Berlin-based Jina AI, that boast high performance on several public benchmarks. In this post, we walk through how to discover and deploy the jina-embeddings-v2 model as part of a Retrieval Augmented Generation (RAG)-based question answering system in SageMaker JumpStart. You can use this tutorial as a starting point for a variety of chatbot-based solutions for customer service, internal support, and question answering systems based on internal and private documents. What is RAG? RAG is the process of optimizing the output of a large language model (LLM) so it references an authoritative knowledge base outside of its training data sources before generating a response. LLMs are trained on vast volumes of data and use billions of parameters to generate original output for tasks like answering questions, translating languages, and completing sentences. RAG extends the already powerful capabilities of LLMs to specific domains or an organization’s internal knowledge base, all without the need to retrain the model. It’s a cost-effective approach to improving LLM output so it...
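As a rough sketch of the one-click-style deployment described above, the SageMaker Python SDK's JumpStartModel class can stand in for the console flow. The model ID, payload keys, and instance type below are assumptions for illustration; check the JumpStart catalog and the model card for the exact values.

```python
# Sketch: deploy a JumpStart embedding model and embed a few documents.
# The model_id, payload key, and instance type are assumptions, not confirmed values.
from sagemaker.jumpstart.model import JumpStartModel

model = JumpStartModel(model_id="huggingface-textembedding-jina-embeddings-v2-base-en")  # assumed ID
predictor = model.deploy(initial_instance_count=1, instance_type="ml.g5.2xlarge")

docs = [
    "Our refund policy allows returns within 30 days.",
    "Support is available 24/7 via chat and email.",
]
response = predictor.predict({"text_inputs": docs})  # payload key is an assumption
print(response)                                      # expect one embedding vector per document

predictor.delete_endpoint()  # avoid ongoing charges once you are done experimenting
```

In a RAG setup, these vectors would be written to a vector store and queried at question time to retrieve the passages handed to the LLM.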
Read More
Business News

SpaceX’s Starship rocket completes test flight for the first time, successfully splashes down

US Top News and Analysis The fourth Starship test flight completed new milestones as SpaceX continues to advance development of the mammoth vehicle. Go to Source 06/06/2024 - 15:29 / Twitter: @hoffeldtcom
Read More
Covid-19

Australia hit by ‘big wave’ of Covid at same time as increase in flu

Coronavirus | The Guardian Experts say both are at ‘critical point’ of escalation and that people should ensure they are up to date with vaccinationsGet our morning and afternoon news emails, free app or daily news podcastAustralia is experiencing a “big wave” of Covid-19 infections that is coinciding with a rise in ​influenza and other winter illnesses, health authorities and experts are warning.Deakin University’s epidemiology chair, Prof Catherine Bennett, said there was a direct alignment in the rise of Covid-19 and flu across the nation, which were “both at that critical point of takeoff where you see a rapid escalation.”Sign up for Guardian Australia’s free morning and afternoon email newsletters for your daily news roundup Continue reading... Go to Source 06/06/2024 - 12:33 /Natasha May Twitter: @hoffeldtcom
Read More
Business News

UG Healthcare makes acquisitions in Spain, Germany

The Straits Times Business News The moves aim to drive growth in downstream distribution business in Europe Go to Source 06/06/2024 - 06:14 / Twitter: @hoffeldtcom
Read More
Business News

Ant’s Singapore digital bank Anext eyes growing demand from foreign firms

The Straits Times Business News More than 30 per cent of the bank’s customers were foreign business owners, spanning 78 nationalities, as at the end of May. Go to Source 06/06/2024 - 03:43 / Twitter: @hoffeldtcom
Read More
Business News

US services sector activity rebounds while private payrolls growth slows

The Straits Times Business News The reports paint a mixed picture of an economy that continues to withstand the hefty rate increases Go to Source 06/06/2024 - 03:13 / Twitter: @hoffeldtcom
Read More
Artificial Intelligence

Mouth-based touchpad enables people living with paralysis to interact with computers

MIT News - Artificial intelligence When Tomás Vega SM ’19 was 5 years old, he began to stutter. The experience gave him an appreciation for the adversity that can come with a disability. It also showed him the power of technology.“A keyboard and a mouse were outlets,” Vega says. “They allowed me to be fluent in the things I did. I was able to transcend my limitations in a way, so I became obsessed with human augmentation and with the concept of cyborgs. I also gained empathy. I think we all have empathy, but we apply it according to our own experiences.”Vega has been using technology to augment human capabilities ever since. He began programming when he was 12. In high school, he helped people manage disabilities including hand impairments and multiple sclerosis. In college, first at the University of California at Berkeley and then at MIT, Vega built technologies that helped people with disabilities live more independently.Today Vega is the co-founder and CEO of Augmental, a startup deploying technology that lets people with movement impairments seamlessly interact with their personal computational devices.Augmental’s first product is the MouthPad, which allows users to control their computer, smartphone, or tablet through tongue and head movements. The MouthPad’s pressure-sensitive touch pad sits on the roof of the mouth, and, working with a pair of motion sensors, translates tongue and head gestures into cursor scrolling and clicks in real time via Bluetooth.“We have a big chunk of the brain that is devoted to controlling the position of the tongue,” Vega explains. “The tongue comprises eight muscles, and most of the muscle fibers are slow-twitch, which means they don’t fatigue as quickly. So, I thought why don’t we leverage all of that?”People with spinal cord injuries are already using the MouthPad every day to interact with...
Read More
Artificial Intelligence

Detect email phishing attempts using Amazon Comprehend

AWS Machine Learning Blog Phishing is the process of attempting to acquire sensitive information such as usernames, passwords and credit card details by masquerading as a trustworthy entity using email, telephone or text messages. There are many types of phishing based on the mode of communication and targeted victims. In an email phishing attempt, an email is sent to a group of people as the mode of communication. There are traditional rule-based approaches to detect email phishing. However, new trends are emerging that are hard to handle with a rule-based approach, so there is a need to use machine learning (ML) techniques to augment rule-based approaches for email phishing detection. In this post, we show how to use Amazon Comprehend Custom to train and host an ML model to classify whether an input email is a phishing attempt. Amazon Comprehend is a natural-language processing (NLP) service that uses ML to uncover valuable insights and connections in text. You can use Amazon Comprehend to identify the language of the text; extract key phrases, places, people, brands, or events; understand sentiment about products or services; and identify the main topics from a library of documents. You can customize Amazon Comprehend for your specific requirements without the skillset required to build ML-based NLP solutions. Comprehend Custom builds customized NLP models on your behalf, using training data that you provide. Comprehend Custom supports custom classification and custom entity recognition. Solution overview This post explains how you can use Amazon Comprehend to easily train and host an ML-based model to detect phishing attempts. The following diagram shows how the phishing detection works. You can use this solution with your email servers, where incoming emails are passed through this phishing detector. When an email is flagged as a phishing attempt, the email recipient still gets the...
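Once a custom classifier has been trained on labeled phishing and legitimate emails and exposed on a real-time endpoint, scoring a message is a single API call. A minimal boto3 sketch follows; the endpoint ARN, region, and class names are placeholders, not values from the post.

```python
# Sketch: call a trained Amazon Comprehend custom classifier to score an email.
# The endpoint ARN is a placeholder; you would first train the classifier and
# create the endpoint from your own labeled phishing/legitimate dataset.
import boto3

comprehend = boto3.client("comprehend", region_name="us-east-1")

email_body = "Your account is locked. Click http://example.test/verify to restore access."

result = comprehend.classify_document(
    Text=email_body,
    EndpointArn="arn:aws:comprehend:us-east-1:111122223333:document-classifier-endpoint/phishing-demo",  # placeholder
)

# Each class comes back with a confidence score; flag the email if the phishing class wins.
top = max(result["Classes"], key=lambda c: c["Score"])
print(top["Name"], round(top["Score"], 3))
```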
Read More
Business News

Nvidia passes Apple in market cap as second-most valuable public U.S. company

US Top News and Analysis Investors are becoming more comfortable that Nvidia's huge growth in sales to a handful of cloud companies can persist. Go to Source 05/06/2024 - 21:33 / Twitter: @hoffeldtcom
Read More
Artificial Intelligence

How Skyflow creates technical content in days using Amazon Bedrock

AWS Machine Learning Blog This guest post is co-written with Manny Silva, Head of Documentation at Skyflow, Inc. Startups move quickly, and engineering is often prioritized over documentation. Unfortunately, this prioritization leads to release cycles that don’t match, where features release but documentation lags behind. This leads to increased support calls and unhappy customers. Skyflow is a data privacy vault provider that makes it effortless to secure sensitive data and enforce privacy policies. Skyflow experienced this growth and documentation challenge in early 2023 as it expanded globally from 8 to 22 AWS Regions, including China and other areas of the world such as Saudi Arabia, Uzbekistan, and Kazakhstan. The documentation team, consisting of only two people, found itself overwhelmed as the engineering team, with over 60 people, updated the product to support the scale and rapid feature release cycles. Given the critical nature of Skyflow’s role as a data privacy company, the stakes were particularly high. Customers entrust Skyflow with their data and expect Skyflow to manage it both securely and accurately. The accuracy of Skyflow’s technical content is paramount to earning and keeping customer trust. Although new features were released every other week, documentation for the features took an average of 3 weeks to complete, including drafting, review, and publication. The following diagram illustrates their content creation workflow. Looking at our documentation workflows, we at Skyflow discovered areas where generative artificial intelligence (AI) could improve our efficiency. Specifically, creating the first draft—often referred to as overcoming the “blank page problem”—is typically the most time-consuming step. The review process could also be long depending on the number of inaccuracies found, leading to additional revisions, additional reviews, and additional delays. Both drafting and reviewing needed to be shorter to make doc target timelines match those of engineering. To do this, Skyflow...
Read More
Business News

Private payrolls growth slows to 152,000 in May, much less than expected, ADP says

US Top News and Analysis Private job creation slowed more than expected in May, signaling further slowing in the labor market. Go to Source 05/06/2024 - 15:53 / Twitter: @hoffeldtcom
Read More
Business News

Australia economy slows to a crawl in Q1 as households feel inflation squeeze

The Straits Times Business News Annual growth dropped to 1.1 per cent, the slowest pace in three decades. Go to Source 05/06/2024 - 06:09 / Twitter: @hoffeldtcom
Read More
Business News

Yoma says land business unit not involved in sale of Thai properties after shares soar

The Straits Times Business News SINGAPORE - Yoma Strategic clarified on June 5 that its land unit, Yoma Land, is not involved in the business of selling properties in Thailand. Go to Source 05/06/2024 - 03:39 / Twitter: @hoffeldtcom
Read More
Management

Aimbridge Hospitality picks up CFO from Velvet Taco

Human Resources News - Human Resources News Headlines | Bizjournals.com The CFO will join Aimbridge Hospitality on July 8. The Plano-based hospitality management company oversees properties including The Statler in downtown Dallas and the Sheraton Fort Worth Downtown Hotel. Go to Source 05/06/2024 - 00:04 /Alexa Reed Twitter: @hoffeldtcom
Read More
Management

Introducing 15Five’s Evolution as a Strategic Command Center for Performance Management

15Five Our newest enhancements include executive insights, strategic action planning, and AI-guided manager support, unlocking the power of existing people data to drive higher performance, engagement and retention. Only 2% of CHROs think conventional performance management practices are actually working.  Ouch. The conventional approach has been stagnant for decades, but the last thing HR teams need right now are more needless tactics added to their already overburdened plates.  That’s why we’re so excited to announce a major platform evolution for 15Five, giving HR teams a powerful new way to understand the intersection of employee performance, engagement, and retention data, implement strategic action plans, and track measurable impact. Every performance review, employee engagement survey and other HR program yields a wealth of untapped insights. 15Five is giving HR teams the power of their data, helping them see and identify what matters most, broker action through managers, and track the impact at every step. A strategic command center for performance management 15Five’s HR Outcomes Dashboard is further evolving as a strategic command center for performance management programs, empowering HR leaders to easily explore their own data and develop strategic action plans with leaders and managers. Our newest capabilities include: Trending insights and data visualizations: Historical trend lines for employee performance, engagement and retention are automatically generated from existing people data. This creates a shared understanding across the entire organization, clarifying what’s working and what’s not. Demographic and performance filters: New filters give HR teams total control over analyzing how HR outcomes vary across demographic attributes such as age, gender, and department, as well as by performance designations and engagement levels. These filters provide deeper insights into specific groups, enabling more targeted and effective HR strategies. Executive dashboards: HR teams can customize and share executive level dashboards with the rest of their leadership...
Read More
Artificial Intelligence

Streamline custom model creation and deployment for Amazon Bedrock with Provisioned Throughput using Terraform

AWS Machine Learning Blog As customers seek to incorporate their corpus of knowledge into their generative artificial intelligence (AI) applications, or to build domain-specific models, their data science teams often want to conduct A/B testing and have repeatable experiments. In this post, we discuss a solution that uses infrastructure as code (IaC) to define the process of retrieving and formatting data for model customization and initiating the model customization. This enables you to version and iterate as needed. With Amazon Bedrock, you can privately and securely customize foundation models (FMs) with your own data to build applications that are specific to your domain, organization, and use case. With custom models, you can create unique user experiences that reflect your company’s style, voice, and services. Amazon Bedrock supports two methods of model customization: Fine-tuning allows you to increase model accuracy by providing your own task-specific labeled training dataset and further specialize your FMs. Continued pre-training allows you to train models using your own unlabeled data in a secure and managed environment and supports customer-managed keys. Continued pre-training helps models become more domain-specific by accumulating more robust knowledge and adaptability—beyond their original training. In this post, we provide guidance on how to create an Amazon Bedrock custom model using HashiCorp Terraform that allows you to automate the process, including preparing datasets used for customization. Terraform is an IaC tool that allows you to manage AWS resources, software as a service (SaaS) resources, datasets, and more, using declarative configuration. Terraform provides the benefits of automation, versioning, and repeatability. Solution overview We use Terraform to download a public dataset from the Hugging Face Hub, convert it to JSONL format, and upload it to an Amazon Simple Storage Service (Amazon S3) bucket with a versioned prefix. We then create an Amazon Bedrock custom model using...
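The post manages this workflow declaratively with Terraform; for orientation only, the sketch below shows the underlying Amazon Bedrock API call that such a pipeline ultimately drives to start a fine-tuning job. All names, ARNs, S3 URIs, the base model, and hyperparameters are placeholders or assumptions.

```python
# Sketch: the Amazon Bedrock API call behind a model-customization pipeline.
# The post itself provisions this with Terraform; values below are placeholders.
import boto3

bedrock = boto3.client("bedrock", region_name="us-east-1")

bedrock.create_model_customization_job(
    jobName="customization-demo-001",
    customModelName="my-domain-model",
    roleArn="arn:aws:iam::111122223333:role/BedrockCustomizationRole",  # placeholder
    baseModelIdentifier="amazon.titan-text-express-v1",                 # assumed base model
    customizationType="FINE_TUNING",
    trainingDataConfig={"s3Uri": "s3://my-bucket/datasets/v1/train.jsonl"},
    outputDataConfig={"s3Uri": "s3://my-bucket/custom-models/v1/"},
    hyperParameters={"epochCount": "2", "batchSize": "1", "learningRate": "0.00001"},
)
```

Expressing the same resources in Terraform adds the versioning and repeatability benefits the post highlights, since the dataset prefix and job definition live in reviewable configuration rather than ad hoc scripts.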
Read More
Artificial Intelligence

Boost productivity with video conferencing transcripts and summaries with the Amazon Chime SDK Meeting Summarizer solution

AWS Machine Learning Blog Businesses today heavily rely on video conferencing platforms for effective communication, collaboration, and decision-making. However, despite the convenience these platforms offer, there are persistent challenges in seamlessly integrating them into existing workflows. One of the major pain points is the lack of comprehensive tools to automate the process of joining meetings, recording discussions, and extracting actionable insights from them. This gap results in inefficiencies, missed opportunities, and limited productivity, hindering the seamless flow of information and decision-making processes within organizations. To address this challenge, we’ve developed the Amazon Chime SDK Meeting Summarizer application deployed with the Amazon Cloud Development Kit (AWS CDK). This application uses an Amazon Chime SDK SIP media application, Amazon Transcribe, and Amazon Bedrock to seamlessly join meetings, record meeting audio, and process recordings for transcription and summarization. By integrating these services programmatically through the AWS CDK, we aim to streamline the meeting workflow, empower users with actionable insights, and drive better decision-making outcomes. Our solution currently integrates with popular platforms such as Amazon Chime, Zoom, Cisco Webex, Microsoft Teams, and Google Meet. In addition to deploying the solution, we’ll also teach you the intricacies of prompt engineering in this post. We guide you through addressing parsing and information extraction challenges, including speaker diarization, call scheduling, summarization, and transcript cleaning. Through detailed instructions and structured approaches tailored to each use case, we illustrate the effectiveness of Amazon Bedrock, powered by Anthropic Claude models. Solution overview The following infrastructure diagram provides an overview of the AWS services that are used to create this meeting summarization bot. The core services used in this solution are: An Amazon Chime SDK SIP Media Application is used to dial into the meeting and record meeting audio Amazon Transcribe is used to perform speech-to-text processing of the recorded audio,...
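As a hedged sketch of the summarization step only (the Chime SDK dial-in, recording, and Amazon Transcribe stages are omitted), the snippet below sends a transcript to an Anthropic Claude model on Amazon Bedrock. The model ID, prompt, and transcript are assumptions for illustration, not the solution's actual prompts.

```python
# Sketch: summarize a meeting transcript with an Anthropic Claude model on Amazon Bedrock.
# Model ID and prompt are assumptions; the deployed solution wires this step together
# with the Amazon Chime SDK and Amazon Transcribe via the AWS CDK.
import boto3
import json

bedrock_runtime = boto3.client("bedrock-runtime", region_name="us-east-1")

transcript = "Alice: Let's ship the beta Friday. Bob: I'll finish the load tests by Thursday."

body = {
    "anthropic_version": "bedrock-2023-05-31",
    "max_tokens": 300,
    "messages": [{
        "role": "user",
        "content": f"Summarize this meeting and list action items with owners:\n\n{transcript}",
    }],
}

response = bedrock_runtime.invoke_model(
    modelId="anthropic.claude-3-sonnet-20240229-v1:0",  # assumed model ID
    body=json.dumps(body),
)
print(json.loads(response["body"].read())["content"][0]["text"])
```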
Read More
Covid-19

No value for money in N.B. use of travel nurses, says auditor general

The province's auditor general said the roughly $173 million the province spent on travel nurses was not justified and didn't correlate to COVID-19-related staff vacancies. Go to Source 04/06/2024 - 15:48 / Twitter: @hoffeldtcom
Read More
Covid-19

Covid charity scam trial juror says she was given bag with $120,000 cash to acquit defendants

Coronavirus | The Guardian Juror reported she was offered money to acquit seven charged with stealing more than $40m from program meant to feed childrenA federal juror was dismissed from duty on Monday after reporting that a woman dropped a bag of $120,000 in cash at her home – and offered her more money if she would vote to acquit seven people charged with stealing more than $40m from a program meant to feed children during the pandemic.“This is completely beyond the pale,” said Joseph Thompson, assistant US attorney, in court on Monday. “This is outrageous behavior. This is stuff that happens in mob movies.” Continue reading... Go to Source 04/06/2024 - 15:48 /Associated Press Twitter: @hoffeldtcom
Read More
Covid-19

Fauci describes ‘credible death threats’ for overseeing US Covid-19 response

Coronavirus | The Guardian Doctor, who was head of infectious diseases unit during height of the pandemic, tells Congress he and his family still get harassedAnthony Fauci, the former head of the US infectious diseases unit, has received “credible death threats” stemming from his time overseeing the nation’s fight against Covid-19, he has told Congress.Fauci, who was director of the National Institute of Allergy and Infectious Diseases during the height of attempts to halt the spread of the virus, told a hearing on Capitol Hill that the threats had continued until the present day, even though he retired in 2022. Continue reading... Go to Source 04/06/2024 - 00:04 /Robert Tait in Washington Twitter: @hoffeldtcom
Read More
Business News

Why this entrepreneur chose to spend up to $120k a year on a community initiative

The Straits Times Business News The founder of Repair Kopitiam, which gives broken items a second life, shares why he’s against “maximising profits”. Go to Source 04/06/2024 - 00:03 / Twitter: @hoffeldtcom
Read More
Artificial Intelligence

Prioritizing employee well-being: An innovative approach with generative AI and Amazon SageMaker Canvas

AWS Machine Learning Blog In today’s fast-paced corporate landscape, employee mental health has become a crucial aspect that organizations can no longer overlook. Many companies recognize that their greatest asset lies in their dedicated workforce, and each employee plays a vital role in collective success. As such, promoting employee well-being by creating a safe, inclusive, and supportive environment is of utmost importance. However, quantifying and assessing mental health can be a daunting task. Traditional methods like employee well-being surveys or manual approaches may not always provide the most accurate or actionable insights. In this post, we explore an innovative solution that uses Amazon SageMaker Canvas for mental health assessment at the workplace. We delve into the following topics: The importance of mental health in the workplace An overview of the SageMaker Canvas low-code no-code platform for building machine learning (ML) models The mental health assessment model: Data preparation using the chat feature Training the model on SageMaker Canvas Model evaluation and performance metrics Deployment and integration: Deploying the mental health assessment model Integrating the model into workplace wellness programs or HR systems In this post, we use a dataset from a 2014 survey that measures attitudes towards mental health and frequency of mental health disorders in the tech workplace, then we aggregate and prepare data for an ML model using Amazon SageMaker Data Wrangler for a tabular dataset on SageMaker Canvas. Then we train, build, test, and deploy the model using SageMaker Canvas, without writing any code. Discover how SageMaker Canvas can revolutionize the way organizations approach employee mental health assessment, empowering them to create a more supportive and productive work environment. Stay tuned for insightful content that could reshape the future of workplace well-being. Importance of mental health Maintaining good mental health in the workplace is crucial for both...
Read More
Management

In Her Own Words: Donna Daniels bounces from the NBA to Chase Center GM

Human Resources News - Human Resources News Headlines | Bizjournals.com Women’s sports are gaining more attention, television exposure, sponsorship dollars and fans, and Donna Daniels’ career is a testimonial to the opportunities available to women in sports management. Growing up as a daughter of a football coach, it was not strange for me to have moved 13 times by the time I graduated high school, including a move in the middle of my senior year. Early mornings of getting on the road at 4 a.m. for a six-hour drive to watch a football game were also status quo,… Go to Source 03/06/2024 - 12:01 /Ellen Sherberg Twitter: @hoffeldtcom
Read More
Business News

Real estate executives notably more pessimistic on prime residential property market: Survey

The Straits Times Business News But major price corrections unlikely due to previously committed land and development costs. Go to Source 03/06/2024 - 09:06 / Twitter: @hoffeldtcom
Read More
Artificial Intelligence

A technique for more effective multipurpose robots

MIT News - Artificial intelligence Let’s say you want to train a robot so it understands how to use tools and can then quickly learn to make repairs around your house with a hammer, wrench, and screwdriver. To do that, you would need an enormous amount of data demonstrating tool use.Existing robotic datasets vary widely in modality — some include color images while others are composed of tactile imprints, for instance. Data could also be collected in different domains, like simulation or human demos. And each dataset may capture a unique task and environment.It is difficult to efficiently incorporate data from so many sources in one machine-learning model, so many methods use just one type of data to train a robot. But robots trained this way, with a relatively small amount of task-specific data, are often unable to perform new tasks in unfamiliar environments.In an effort to train better multipurpose robots, MIT researchers developed a technique to combine multiple sources of data across domains, modalities, and tasks using a type of generative AI known as diffusion models.They train a separate diffusion model to learn a strategy, or policy, for completing one task using one specific dataset. Then they combine the policies learned by the diffusion models into a general policy that enables a robot to perform multiple tasks in various settings.In simulations and real-world experiments, this training approach enabled a robot to perform multiple tool-use tasks and adapt to new tasks it did not see during training. The method, known as Policy Composition (PoCo), led to a 20 percent improvement in task performance when compared to baseline techniques.“Addressing heterogeneity in robotic datasets is like a chicken-egg problem. If we want to use a lot of data to train general robot policies, then we first need deployable robots to get all...
Read More
Business News

Orchard Road rejuvenation gets shot in the arm from Delfi Orchard sale

The Straits Times Business News URA’s Strategic Development Incentive scheme received nine applications, of which six were supported. Go to Source 02/06/2024 - 00:03 / Twitter: @hoffeldtcom
Read More
Business News

Parties present competing visions for jobs and growth

BBC News Keir Starmer said wealth creation was Labour's main goal and Rishi Sunak has pledged to give 30 towns across the UK £20m each. Go to Source 01/06/2024 - 12:04 / Twitter: @hoffeldtcom
Read More
Artificial Intelligence

Pre-training genomic language models using AWS HealthOmics and Amazon SageMaker

AWS Machine Learning Blog Genomic language models are a new and exciting field in the application of large language models to challenges in genomics. In this blog post and open source project, we show you how you can pre-train a genomics language model, HyenaDNA, using your genomic data in the AWS Cloud. Here, we use AWS HealthOmics storage as a convenient and cost-effective omic data store and Amazon SageMaker as a fully managed machine learning (ML) service to train and deploy the model. Genomic language models Genomic language models represent a new approach in the field of genomics, offering a way to understand the language of DNA. These models use the transformer architecture, originally developed for natural language processing (NLP), to interpret the vast amount of genomic information available, allowing researchers and scientists to extract meaningful insights more accurately than with existing in silico approaches and more cost-effectively than with existing in situ techniques. By bridging the gap between raw genetic data and actionable knowledge, genomic language models hold immense promise for various industries and research areas, including whole-genome analysis, delivered care, pharmaceuticals, and agriculture. They facilitate the discovery of novel gene functions, the identification of disease-causing mutations, and the development of personalized treatment strategies, ultimately driving innovation and advancement in genomics-driven fields. The ability to effectively analyze and interpret genomic data at scale is the key to precision medicine, agricultural optimization, and biotechnological breakthroughs, making genomic language models a possible new foundational technology in these industries. Some of the pioneering genomic language models include DNABERT, which was one of the first attempts to use the transformer architecture to learn the language of DNA. DNABERT used a Bidirectional Encoder Representations from Transformers (BERT, encoder-only) architecture pre-trained on a human reference genome and showed promising results on downstream supervised tasks. Nucleotide...
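A minimal sketch of kicking off such a pre-training run with the SageMaker Python SDK's PyTorch estimator is shown below. The training script name, instance type, framework versions, IAM role, and S3 locations are placeholders; the open source project referenced in the post provides the actual training code and the AWS HealthOmics integration.

```python
# Sketch: launch a pre-training job on SageMaker with the PyTorch estimator.
# Script name, role ARN, instance type, and data locations are placeholders.
from sagemaker.pytorch import PyTorch

estimator = PyTorch(
    entry_point="train_hyenadna.py",        # hypothetical script name
    source_dir="src",
    role="arn:aws:iam::111122223333:role/SageMakerExecutionRole",  # placeholder
    instance_type="ml.g5.12xlarge",
    instance_count=1,
    framework_version="2.1",
    py_version="py310",
    hyperparameters={"seq_len": 32768, "epochs": 1},
)

# Channels map to S3 prefixes; genomic sequences exported from AWS HealthOmics
# storage would be staged here before training starts.
estimator.fit({"train": "s3://my-bucket/genomics/train/"})
```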
Read More
Artificial Intelligence

Falcon 2 11B is now available on Amazon SageMaker JumpStart

AWS Machine Learning Blog Today, we are excited to announce that the first model in the next generation Falcon 2 family, the Falcon 2 11B foundation model (FM) from Technology Innovation Institute (TII), is available through Amazon SageMaker JumpStart to deploy and run inference. Falcon 2 11B is a dense decoder model trained on a 5.5 trillion token dataset and supports multiple languages. The Falcon 2 11B model is available on SageMaker JumpStart, a machine learning (ML) hub that provides access to built-in algorithms, FMs, and pre-built ML solutions that you can deploy quickly and get started with ML faster. In this post, we walk through how to discover, deploy, and run inference on the Falcon 2 11B model using SageMaker JumpStart. What is the Falcon 2 11B model? Falcon 2 11B is the first FM released by TII under their new artificial intelligence (AI) model series Falcon 2. It’s a next generation model in the Falcon family—a more efficient and accessible large language model (LLM) with 11 billion parameters that is trained on a 5.5 trillion token dataset primarily consisting of web data from RefinedWeb. It’s built on a causal decoder-only architecture, making it powerful for auto-regressive tasks. It’s equipped with multilingual capabilities and can seamlessly tackle tasks in English, French, Spanish, German, Portuguese, and other languages for diverse scenarios. Falcon 2 11B is a raw, pre-trained model, which can be a foundation for more specialized tasks, and also allows you to fine-tune the model for specific use cases such as summarization, text generation, chatbots, and more. Falcon 2 11B is supported by the SageMaker TGI Deep Learning Container (DLC), which is powered by Text Generation Inference (TGI), an open source, purpose-built solution for deploying and serving LLMs that enables high-performance text generation using tensor parallelism and dynamic batching. The...
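A short sketch of deploying the model from SageMaker JumpStart and sending a TGI-style generation request follows. The model ID, payload keys, and generation parameters are assumptions; the JumpStart model card lists the exact identifiers and defaults.

```python
# Sketch: deploy Falcon 2 11B from SageMaker JumpStart and run a text-generation request.
# The model_id and payload shape are assumptions based on the TGI request format.
from sagemaker.jumpstart.model import JumpStartModel

model = JumpStartModel(model_id="huggingface-llm-falcon2-11b")  # assumed ID
predictor = model.deploy()

payload = {
    "inputs": "Write a short product description for a solar-powered lantern.",
    "parameters": {"max_new_tokens": 128, "temperature": 0.6, "top_p": 0.9},
}
print(predictor.predict(payload))   # TGI-style response with the generated text

predictor.delete_endpoint()          # tear down the endpoint when finished
```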
Read More
Artificial Intelligence

Implementing Knowledge Bases for Amazon Bedrock in support of GDPR (right to be forgotten) requests

AWS Machine Learning Blog The General Data Protection Regulation (GDPR) right to be forgotten, also known as the right to erasure, gives individuals the right to request the deletion of their personally identifiable information (PII) data held by organizations. This means that individuals can ask companies to erase their personal data from their systems and from the systems of any third parties with whom the data was shared. Amazon Bedrock is a fully managed service that makes foundational models (FMs) from leading artificial intelligence (AI) companies and Amazon available through an API, so you can choose from a wide range of FMs to find the model that’s best suited for your use case. With the Amazon Bedrock serverless experience, you can get started quickly, privately customize FMs with your own data, and integrate and deploy them into your applications using the Amazon Web Services (AWS) tools without having to manage infrastructure. FMs are trained on vast quantities of data, allowing them to be used to answer questions on a variety of subjects. However, if you want to use an FM to answer questions about your private data that you have stored in your Amazon Simple Storage Service (Amazon S3) bucket, you need to use a technique known as Retrieval Augmented Generation (RAG) to provide relevant answers for your customers. Knowledge Bases for Amazon Bedrock is a fully managed RAG capability that allows you to customize FM responses with contextual and relevant company data. Knowledge Bases for Amazon Bedrock automates the end-to-end RAG workflow, including ingestion, retrieval, prompt augmentation, and citations, so you don’t have to write custom code to integrate data sources and manage queries. Many organizations are building generative AI applications and powering them with RAG-based architectures to help avoid hallucinations and respond to the requests based on their company-owned proprietary data, including...
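A brief sketch of querying a knowledge base with the RetrieveAndGenerate API is shown below; the knowledge base ID and model ARN are placeholders. Honoring an erasure request would additionally involve deleting the individual's documents from the S3 data source and re-syncing the knowledge base so the derived vectors are removed, which this snippet does not show.

```python
# Sketch: query Knowledge Bases for Amazon Bedrock with the RetrieveAndGenerate API.
# Knowledge base ID and model ARN are placeholders for illustration.
import boto3

agent_runtime = boto3.client("bedrock-agent-runtime", region_name="us-east-1")

response = agent_runtime.retrieve_and_generate(
    input={"text": "What onboarding documents do we hold for customer 4711?"},
    retrieveAndGenerateConfiguration={
        "type": "KNOWLEDGE_BASE",
        "knowledgeBaseConfiguration": {
            "knowledgeBaseId": "KB1234567890",  # placeholder
            "modelArn": "arn:aws:bedrock:us-east-1::foundation-model/anthropic.claude-3-sonnet-20240229-v1:0",
        },
    },
)
print(response["output"]["text"])   # grounded answer with citations available in the response
```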
Read More
Business News

UBS completes historic takeover as Credit Suisse ceases to exist

The Straits Times Business News This closes the book on a bank that played a central role in the development of Switzerland. Go to Source 31/05/2024 - 15:00 / Twitter: @hoffeldtcom
Read More
Business News

Rise in credit card and other unsecured debt in S’pore even as card billings fall

The Straits Times Business News The growth in credit card debt contributed to an increase in personal loans. Go to Source 31/05/2024 - 12:04 / Twitter: @hoffeldtcom
Read More
Psychology

Bridging Policy and Research for Suicide Prevention in the Americas: A Joint PAHO/NIMH Symposium on Suicide Prevention

NIMH News Feed The Pan American Health Organization (PAHO) and the National Institute of Mental Health (NIMH) are organizing a two-day symposium on suicide prevention, a key priority of the Americas' public health agenda. Go to Source 31/05/2024 - 06:03 /National Institute of Mental Health Twitter: @hoffeldtcom
Read More
Artificial Intelligence

CBRE and AWS perform natural language queries of structured data using Amazon Bedrock

AWS Machine Learning Blog This is a guest post co-written with CBRE. CBRE is the world’s largest commercial real estate services and investment firm, with 130,000 professionals serving clients in more than 100 countries. Services range from financing and investment to property management. CBRE is unlocking the potential of artificial intelligence (AI) to realize value across the entire commercial real estate lifecycle—from guiding investment decisions to managing buildings. The opportunity to unlock value using AI in the commercial real estate lifecycle starts with data at scale. CBRE’s data environment, with 39 billion data points from over 300 sources, combined with a suite of enterprise-grade technology, can deploy a range of AI solutions to enable everything from individual productivity to broadscale transformation. Although CBRE provides customers with curated, best-in-class dashboards, CBRE wanted to provide a solution for their customers to quickly make custom queries of their data using only natural language prompts. Amazon Bedrock is a fully managed service that offers a choice of high-performing foundation models (FMs) from leading AI companies like AI21 Labs, Anthropic, Cohere, Meta, Stability AI, and Amazon with a single API, along with a broad set of capabilities to build generative AI applications, simplifying development while maintaining privacy and security. With the comprehensive capabilities of Amazon Bedrock, you can experiment with a variety of FMs, privately customize them with your own data using techniques such as fine-tuning and Retrieval Augmented Generation (RAG), and create managed agents that run complex business tasks—from booking travel and processing insurance claims to creating ad campaigns and managing inventory—all without the need to write code. Because Amazon Bedrock is serverless, you don’t have to manage infrastructure, and you can securely integrate and deploy generative AI capabilities into your applications using the AWS services you are already familiar with. In this...
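The sketch below illustrates the general natural-language-to-SQL pattern against a foundation model on Amazon Bedrock. The table schema, question, model ID, and prompt wording are assumptions and not CBRE's actual solution, which adds its own validation, security, and guardrails on top.

```python
# Sketch of the text-to-SQL pattern: give the model the table schema and a question,
# and ask it to return SQL only. Schema, question, and model ID are illustrative assumptions.
import boto3

bedrock_runtime = boto3.client("bedrock-runtime", region_name="us-east-1")

schema = "leases(property_id, tenant, city, annual_rent_usd, lease_end_date)"
question = "Which tenants in Dallas have leases ending in 2025?"
prompt = (
    f"You translate questions into SQL for the table {schema}. "
    f"Return only the SQL query, nothing else.\n\nQuestion: {question}"
)

response = bedrock_runtime.converse(
    modelId="anthropic.claude-3-haiku-20240307-v1:0",           # assumed model ID
    messages=[{"role": "user", "content": [{"text": prompt}]}],
    inferenceConfig={"maxTokens": 200, "temperature": 0},
)
print(response["output"]["message"]["content"][0]["text"])      # generated SQL to validate before running
```

In production, the generated SQL would be checked against an allow-list of tables and executed with read-only credentials rather than run as-is.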
Read More
Artificial Intelligence

Dynamic video content moderation and policy evaluation using AWS generative AI services

AWS Machine Learning Blog Organizations across media and entertainment, advertising, social media, education, and other sectors require efficient solutions to extract information from videos and apply flexible evaluations based on their policies. Generative artificial intelligence (AI) has unlocked fresh opportunities for these use cases. In this post, we introduce the Media Analysis and Policy Evaluation solution, which uses AWS AI and generative AI services to provide a framework to streamline video extraction and evaluation processes. Popular use cases Advertising tech companies own video content like ad creatives. When it comes to video analysis, priorities include brand safety, regulatory compliance, and engaging content. This solution, powered by AWS AI and generative AI services, meets these needs. Advanced content moderation makes sure ads appear alongside safe, compliant content, building trust with consumers. You can use the solution to evaluate videos against content compliance policies. You can also use it to create compelling headlines and summaries, boosting user engagement and ad performance. Educational tech companies manage large inventories of training videos. An efficient way to analyze videos will help them evaluate content against industry policies, index videos for efficient search, and perform dynamic detection and redaction tasks, such as blurring student faces in a Zoom recording. The solution is available on the GitHub repository and can be deployed to your AWS account using an AWS Cloud Development Kit (AWS CDK) package. Solution overview Media extraction – After a video is uploaded, the app starts preprocessing by extracting image frames from the video. Each frame will be analyzed using Amazon Rekognition and Amazon Bedrock for metadata extraction. In parallel, the system extracts audio transcription from the uploaded content using Amazon Transcribe. Policy evaluation – Using the extracted metadata from the video, the system conducts an LLM evaluation. This allows you to take advantage of the flexibility...
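For the media extraction step, a hedged sketch of analyzing a single extracted frame with Amazon Rekognition's image moderation API is shown below. Frame extraction itself (for example with OpenCV or ffmpeg) and the Amazon Transcribe and Amazon Bedrock stages are omitted, and the file path and confidence threshold are placeholders.

```python
# Sketch: score one extracted video frame for unsafe content with Amazon Rekognition.
# The frame path and MinConfidence threshold are placeholders for illustration.
import boto3

rekognition = boto3.client("rekognition", region_name="us-east-1")

with open("frames/frame_00042.jpg", "rb") as f:      # placeholder frame extracted from the video
    frame_bytes = f.read()

result = rekognition.detect_moderation_labels(
    Image={"Bytes": frame_bytes},
    MinConfidence=60,
)

# Labels above the threshold become metadata that feeds the downstream LLM policy evaluation.
for label in result["ModerationLabels"]:
    print(label["Name"], round(label["Confidence"], 1))
```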
Read More
Artificial Intelligence

Vitech uses Amazon Bedrock to revolutionize information access with AI-powered chatbot

AWS Machine Learning Blog This post is co-written with Murthy Palla and Madesh Subbanna from Vitech. Vitech is a global provider of cloud-centered benefit and investment administration software. Vitech helps group insurance, pension fund administration, and investment clients expand their offerings and capabilities, streamline their operations, and gain analytical insights. To serve their customers, Vitech maintains a repository of information that includes product documentation (user guides, standard operating procedures, runbooks), which is currently scattered across multiple internal platforms (for example, Confluence sites and SharePoint folders). The lack of a centralized and easily navigable knowledge system led to several issues, including: Low productivity due to the lack of an efficient retrieval system, which often leads to information overload Inconsistent information access because there was no singular, unified source of truth To address these challenges, Vitech used generative artificial intelligence (AI) with Amazon Bedrock to build VitechIQ, an AI-powered chatbot for Vitech employees to access an internal repository of documentation. For customers that are looking to build an AI-driven chatbot that interacts with an internal repository of documents, AWS offers a fully managed capability, Knowledge Bases for Amazon Bedrock, that can implement the entire Retrieval Augmented Generation (RAG) workflow from ingestion to retrieval and prompt augmentation, without having to build any custom integrations to data sources or manage data flows. Alternatively, open-source technologies like Langchain can be used to orchestrate the end-to-end flow. In this blog, we walk through the architectural components, the evaluation criteria for the components selected by Vitech, and the process flow of user interaction within VitechIQ. Technical components and evaluation criteria In this section, we discuss the key technical components and evaluation criteria for the components involved in building the solution. Hosting large language models Vitech explored the option of hosting large language models (LLMs) using Amazon SageMaker. Vitech needed a...
Read More
Management

How to Set and Achieve Goals for Professional Development

15Five Most of your employees want to get better. Whether it comes from an inherent need for self-improvement or the external motivation of a promotion, professional development goals allow managers to chart a course for that improvement in a way that benefits both the employee and the business. Unlike other self-improvement goals, professional development goals are all about moving forward in your career. They make employees better collaborators and more reliable team members. They make managers better resources for their teams. Even your C-Suite can use professional development goals to become better leaders. Professional development goals can be used as part of an official career growth plan—ending in a promotion—or for ongoing personal development. Here’s your full guide to using these goals to build a team of star players. What are professional development goals? A professional development goal has one purpose: to guide a team member toward a specific improvement. You might help a marketer build up their skill set in a particular marketing channel or niche they aren’t yet proficient in. Maybe someone in HR wants to get better at those sometimes uncomfortable 1-on-1 conversations that are part of the job.  These goals are usually framed by a manager or leader to ensure the area being developed benefits the organization as a whole—or at least the team the employee is part of. Performance is tracked over time to make sure everyone’s headed in the right direction. What are good professional development goals? Setting a goal can be as simple as writing a single sentence on a piece of paper that roughly describes something you want to achieve in the future, like “be a better marketer.” But that’s not a particularly good example of a professional development goal. It’s vague, it’s not recorded in a way that can be tracked,...
Read More
Artificial Intelligence

End-to-end LLM training on instance clusters with over 100 nodes using AWS Trainium

AWS Machine Learning Blog Llama is Meta AI’s large language model (LLM), with variants ranging from 7 billion to 70 billion parameters. Llama uses a transformers-based decoder-only model architecture, which specializes at language token generation. To train a model from scratch, a dataset containing trillions of tokens is required. The Llama family is one of the most popular LLMs. However, training Llama models can be technically challenging, prolonged, and costly. In this post, we show you how to accelerate the full pre-training of LLM models by scaling up to 128 trn1.32xlarge nodes, using a Llama 2-7B model as an example. We share best practices for training LLMs on AWS Trainium, scaling the training on a cluster with over 100 nodes, improving efficiency of recovery from system and hardware failures, improving training stability, and achieving convergence. We demonstrate that the quality of Llama 2-7B trained on Trainium is of comparable quality to the open source version on multiple tasks, ranging from multi-task language understanding, math reasoning, to code generation. We also demonstrate the scaling benefits of Trainium. What makes distributed training across over 100 nodes so challenging? Training large-scale LLMs requires distributed training across over 100 nodes, and getting elastic access to large clusters of high-performance compute is difficult. Even if you manage to get the required accelerated compute capacity, it’s challenging to manage a cluster of over 100 nodes, maintain hardware stability, and achieve model training stability and convergence. Let’s look at these challenges one by one and how we address them with Trainium clusters during the end-to-end training: Distributed training infrastructure efficiency and scalability – Training LLMs is both computation and memory intensive. In this post, we show you how to enable the different parallel training algorithms on Trainium and select the best hyperparameters to achieve the highest throughput...
Read More
Business News

American shares tumble 13% after sales strategy backfires; carrier cuts growth

US Top News and Analysis American cut its profit and unit revenue forecast for the second quarter, which coincides with some of the busiest travel periods of the year. Go to Source 29/05/2024 - 15:03 / Twitter: @hoffeldtcom
Read More
Business News

Abercrombie & Fitch shares jump more than 10% as retailer’s torrid growth continues

US Top News and Analysis Abercrombie & Fitch is building on its blockbuster 2023 by growing Hollister and adding new categories to its namesake banner, such as the A&F Wedding Shop. Go to Source 29/05/2024 - 15:03 / Twitter: @hoffeldtcom
Read More
Covid-19

Global pandemic treaty could be more than a year away after deadline missed

Coronavirus | The Guardian Health leaders say extensive negotiations still needed to agree set of measures on how the world should prevent and respond to future pandemicsExplainer: What is the pandemic accord and why have negotiations been so difficult?Global health leaders have said an international treaty governing how the world should deal with future pandemics may not be agreed for another year or more.After two years of negotiations, countries failed to agree on the text of an international pandemic accord by a deadline of 24 May. And at the World Health Assembly in Geneva on Tuesday delegates said extensive further negotiations would be needed. Continue reading... Go to Source 29/05/2024 - 12:06 /Kat Lay, Global health correspondent in Geneva Twitter: @hoffeldtcom
Read More
Artificial Intelligence

Looking for a specific action in a video? This AI-based method can find it for you

MIT News - Artificial intelligence The internet is awash in instructional videos that can teach curious viewers everything from cooking the perfect pancake to performing a life-saving Heimlich maneuver.But pinpointing when and where a particular action happens in a long video can be tedious. To streamline the process, scientists are trying to teach computers to perform this task. Ideally, a user could just describe the action they’re looking for, and an AI model would skip to its location in the video.However, teaching machine-learning models to do this usually requires a great deal of expensive video data that have been painstakingly hand-labeled.A new, more efficient approach from researchers at MIT and the MIT-IBM Watson AI Lab trains a model to perform this task, known as spatio-temporal grounding, using only videos and their automatically generated transcripts.The researchers teach a model to understand an unlabeled video in two distinct ways: by looking at small details to figure out where objects are located (spatial information) and looking at the bigger picture to understand when the action occurs (temporal information).Compared to other AI approaches, their method more accurately identifies actions in longer videos with multiple activities. Interestingly, they found that simultaneously training on spatial and temporal information makes a model better at identifying each individually.In addition to streamlining online learning and virtual training processes, this technique could also be useful in health care settings by rapidly finding key moments in videos of diagnostic procedures, for example.“We disentangle the challenge of trying to encode spatial and temporal information all at once and instead think about it like two experts working on their own, which turns out to be a more explicit way to encode the information. Our model, which combines these two separate branches, leads to the best performance,” says Brian Chen, lead author of a...
Read More
Artificial Intelligence

Controlled diffusion model can change material properties in images

MIT News - Artificial intelligence Researchers from the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL) and Google Research may have just performed digital sorcery — in the form of a diffusion model that can change the material properties of objects in images.Dubbed Alchemist, the system allows users to alter four attributes of both real and AI-generated pictures: roughness, metallicity, albedo (an object’s initial base color), and transparency. As an image-to-image diffusion model, one can input any photo and then adjust each property within a continuous scale of -1 to 1 to create a new visual. These photo editing capabilities could potentially extend to improving the models in video games, expanding the capabilities of AI in visual effects, and enriching robotic training data.The magic behind Alchemist starts with a denoising diffusion model: In practice, researchers used Stable Diffusion 1.5, which is a text-to-image model lauded for its photorealistic results and editing capabilities. Previous work built on the popular model to enable users to make higher-level changes, like swapping objects or altering the depth of images. In contrast, CSAIL and Google Research’s method applies this model to focus on low-level attributes, revising the finer details of an object’s material properties with a unique, slider-based interface that outperforms its counterparts.While prior diffusion systems could pull a proverbial rabbit out of a hat for an image, Alchemist could transform that same animal to look translucent. The system could also make a rubber duck appear metallic, remove the golden hue from a goldfish, and shine an old shoe. Programs like Photoshop have similar capabilities, but this model can change material properties in a more straightforward way. For instance, modifying the metallic look of a photo requires several steps in the widely used application.“When you look at an image you’ve created, often the result is...
Read More
Business News

CDL unit buys Delfi Orchard in $439m collective sale

The Straits Times Business News CDL may tap the Urban Redevelopment Authority’s Strategic Development Incentive scheme for Delfi's rejuvenation. Go to Source 28/05/2024 - 15:08 / Twitter: @hoffeldtcom
Read More
Business News

Citi Private Bank sees opportunities in S’pore, region in 2024 as it completes most of job cuts

The Straits Times Business News Asia is expected to see the strongest growth in ultra-high net-worth individuals. Go to Source 28/05/2024 - 12:01 / Twitter: @hoffeldtcom
Read More
Management

This coveted perk could be critical to workforce development efforts

Human Resources News - Human Resources News Headlines | Bizjournals.com As the focus shifts from recruitment to retention in a still-tight hiring market, many employers are searching for the incentives that will retain workers. While perks like four-day workweeks, unlimited vacation and remote work are often in the spotlight, another coveted perk is career development and upskilling opportunities. That's according to The State of Upskilling and Reskilling — a survey by learning management system TalentLMS and human resources suite Workable. The survey found 71%… Go to Source 28/05/2024 - 12:01 /Marq Burnett Twitter: @hoffeldtcom
Read More
Business News

Tories make tax pledge on pensions as business leaders back Labour

BBC News Parties make pledges on pensions and economy as PM and shadow chancellor head to the Midlands. Go to Source 28/05/2024 - 09:03 / Twitter: @hoffeldtcom
Read More
Business News

S’pore payment firm Triple-A adds PayPal’s digital currency in bid to chase growth

The Straits Times Business News Tie-up will go live at end June. Go to Source 28/05/2024 - 09:03 / Twitter: @hoffeldtcom
Read More
Psychology

NIMH Genomics Team 75th Anniversary Webinar: Celebrating Advancements in Psychiatric Genomics

NIMH News Feed As part of the yearlong 75th Anniversary celebration, the National Institute of Mental Health (NIMH) is hosting a webinar to explore key advances in genetics and genomics research. Go to Source 28/05/2024 - 06:04 /National Institute of Mental Health Twitter: @hoffeldtcom
Read More
Psychology

Workshop: Promoting Mental Health for Sexual and Gender Minority Youth: Evidence-Based Developmental Perspectives

NIMH News Feed This two-day virtual workshop convenes researchers, youth advocates, and federal officials to review the state of the science on developmental trajectories of gender identity and sexuality with a focus on research aimed at the promotion of mental health for sexual and gender minority youth. Go to Source 28/05/2024 - 06:03 /National Institute of Mental Health Twitter: @hoffeldtcom
Read More
Talent Management

Everything About AI for HR Leaders and Talent Acquisition

Everyone's Blog Posts - RecruitingBlogs The snowballing use of generative AI tools such as ChatGPT keeps pushing AI’s popularity higher, and that popularity is translating into better-optimized procedures and effective solutions across business functions, including HR (human resources). Recruiters are integrating advanced AI tools and technologies to streamline their recruitment process. According to LinkedIn, 72% of recruiters find AI beneficial for candidate sourcing, and McKinsey reports that 60% of companies had adopted AI for talent management as of 2024. Let’s look at how AI can affect the overall talent acquisition process. What Role Does AI Play in Talent Acquisition? The role of AI software in talent acquisition comes down to two areas. Data analysis: AI software analyzes large volumes of applicant data in far less time and determines whether each applicant has the required skills, qualifications, and experience; HR leaders can sort hundreds of submitted resumes with AI software and quickly find the right applicants for a given job description. Efficiency across multiple processes: AI can streamline several steps of talent acquisition by automating them, for example finding the most qualified candidates, scheduling their interviews, reviewing their profiles, and predicting whether they are likely to stay with the organization. Benefits of AI Integration with Talent Acquisition Combining HR workflows with AI-based talent acquisition software can automate hiring and the applicant experience with little or no human intervention. This combined, automated approach offers several benefits. Effective job descriptions and hiring accuracy: generative AI in an HRMS (human resources management system) can produce suitable job profiles within minutes or hours, and AI-powered software can match job prerequisites and qualifications against candidates’ experience and skills. AI can evaluate the...
Read More
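
As a rough illustration of the resume-to-job-description matching described above, here is a minimal Python sketch using plain TF-IDF vectors and cosine similarity from scikit-learn. It is not any particular HR vendor’s product or a generative-AI system; the job description, resume texts, and candidate names are invented for the example.

```python
# Minimal sketch of resume-to-job-description matching, assuming a plain
# TF-IDF / cosine-similarity ranker. Sample texts are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

job_description = (
    "Senior data engineer: Python, SQL, Airflow pipelines, "
    "cloud data warehouses, 5+ years experience."
)

resumes = {
    "candidate_a": "Data engineer, 6 years Python and SQL, built Airflow ETL on AWS.",
    "candidate_b": "Graphic designer experienced in branding and print layout.",
    "candidate_c": "Analytics engineer: dbt, SQL, Python, Snowflake warehouse modelling.",
}

# Vectorize the job description together with every resume so they share a vocabulary.
vectorizer = TfidfVectorizer(stop_words="english")
matrix = vectorizer.fit_transform([job_description, *resumes.values()])

# Cosine similarity of each resume (rows 1..n) against the job description (row 0).
scores = cosine_similarity(matrix[0:1], matrix[1:]).ravel()

# Rank candidates by similarity, highest first.
for name, score in sorted(zip(resumes, scores), key=lambda x: -x[1]):
    print(f"{name}: {score:.2f}")
```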
Psychology

Some birds may use ‘mental time travel,’ study finds

PsycPORT™: Psychology Newswire While episodic memory is integral to how most people experience the world, it can be difficult for scientists to prove whether nonhuman animals share this ability. Go to Source 24/05/2024 - 21:04 / Twitter: @hoffeldtcom
Read More

The messages, text, and photos belong to the party that sends out the RSS feed or are otherwise related to the sender.
