Cricket Australia’s culture problem is it still doesn’t think fans are stakeholders in the game


David R. Gallagher, The University of Queensland

The most telling part of the long-awaited review into the rotten culture of elite Australian cricket is what it doesn’t say. Or more correctly, what it does say, but what the establishment that owns and controls professional cricket won’t let us read.

Even a chunk of the executive summary of the report is redacted, like state secrets from a confidential intelligence dossier.


First page of the executive summary of the report Australian Cricket: A Matter of Balance.
Cricket Australia

A further 22 pages in the report also contain redacted material, in some parts quite extensive. There may be other reasons for the dozens of redactions, but it’s hard not to conclude the principal motivation is that some criticisms of Cricket Australia, and of specific individuals, are just too close to the bone.


Second page of the executive summary of the report Australian Cricket: A Matter of Balance.
Cricket Australia

The redactions are at odds with the “complete transparency” talked about by Cricket Australia chairman David Peever.

They are emblematic of Cricket Australia’s lack of accountability to the game’s most important stakeholders – the cricketing public.

A very public scandal

The report stems from the ball-tampering scandal in March 2018, when the leaders of the Australian men’s cricket team were involved in a brazen attempt to cheat during a match against South Africa. Three players, including captain Steve Smith and vice-captain David Warner, were given unprecedented 12-month suspensions.

Cricket Australia then commissioned the respected Ethics Centre to conduct an independent review covering “cultural, organisational and/or governance issues” related to cricket’s administration.




Read more:
Australian cricket’s wake-up call on a culture that has cost it dearly


The investigation has spanned the entire organisation (including the member state associations that are essentially the shareholders of Cricket Australia). It has looked at selection processes, values, leadership and the financial arrangements involving players, sponsors and broadcasters.

Sins of omission

The report says the leadership of Cricket Australia should accept responsibility for several failures. We don’t know what the first failure is, because it has been redacted. But the second is an “inadvertent (but foreseeable) failure to create and support a culture in which the will-to-win was balanced by an equal commitment to moral courage and ethical restraint”.

The review does – as far as we can tell – save Cricket Australia from blame for promoting a “win at all costs” culture. But it levels a charge almost as serious.

In our opinion, CA’s fault is not that it established a culture of ‘win at all costs’. Rather, it made the fateful mistake of enacting a program that would lead to ‘winning without counting the costs’.

It is this approach that has led, inadvertently, to the situation in which cricket finds itself today – for good and for ill.

A series of unfortunate events

Several significant and controversial decisions were made in the weeks prior to the report’s delayed release. After a “global search”, the board appointed Cricket Australia insider Kevin Roberts to replace retiring chief executive James Sutherland. It then re-appointed long-time board chairman David Peever for a further three-year term.



These decisions suggest Cricket Australia’s highest echelons just aren’t taking responsibility. Doesn’t the buck stop with the chairman and board? How can a significant review finding cultural problems across the entire structure not lead to any meaningful changes in its leadership and governance?




Read more:
Cricket Australia’s culture sore: captains of the finance industry should take note


Notable rejections

Cricket Australia says it will adopt most of the independent review’s 42 recommendations. It accepts “sin bin” measures for player misbehaviour, annual player awards that take sportsmanship and character into account, an ethics commission to strengthen accountability, and the inclusion, at last, of sledging in an anti-harassment code.

There are, however, two notable rejections.

One is that, “subject to issues of confidentiality (commercial and otherwise)”, the board publish the minutes of its meetings, as is done by the Board of Control for Cricket in India. Another black mark against transparency.

Operating in a parallel universe

All this points to a critical problem with Cricket Australia’s governance and leadership.

On page 13, the report includes a definition of cricket’s stakeholders: “All parties who hold a stake in the success of CA and Cricket-in-Australia (the general public was not included in the scope for research).”

This seems to sum up Cricket Australia’s attitude perfectly: it pays lip service to the fans, but in practice treats them as a cash cow, not real stakeholders.

Like other sporting codes, cricket is a monopoly. Cricket Australia is a company limited by guarantee, owned by the state and territory associations. It controls the game as a lucrative business. The general public might love the game, but we have no ownership of it, nor any direct influence over it.

The only means we might have to effect meaningful reform is by voting with our feet.

David R. Gallagher, Malcolm Broomhead Chair in Finance, The University of Queensland

This article is republished from The Conversation under a Creative Commons license. Read the original article.


Australian cricket’s wake-up call on a culture that has cost it dearly



An independent report has found that Australian cricket’s culture of win-at-all-costs has come at a high price.
AAP/Dean Lewins

Steve Georgakis, University of Sydney

On the face of it, there is much to celebrate about Australian cricket right now. The sport has money to burn thanks to recent large pay-TV deals; the Big Bash has strong TV ratings; and the women’s cricket team is one of the most successful and highly regarded Australian sporting teams. There has been a surge in junior participation, and many Australians still regard cricket as our national sport – it is certainly the dominant sport of the summer.

By contrast, cricket in the media is lurching from one crisis to another, with the fallout from the now-infamous ball-tampering episode still reverberating. Yesterday, the sport was dealt another blow.

An independent review of Cricket Australia’s culture was released, concluding that “winning without counting the costs” was largely responsible for the recent ball-tampering scandal and sledging – the on-field verbal abuse and taunting – that has been an entrenched habit of the Australian team for decades.




Read more:
Just not cricket: why ball tampering is cheating


The review found the sport was riddled with cultural problems that exerted so much pressure to win that it manifested in cheating and sledging, covertly sanctioned by the administrators. With 42 recommendations, it is clear Cricket Australia needs to change.

The biggest cultural issue for the sport at the moment is sledging, and one of the recommendations calls for cricket’s anti-harassment code to address abusive behaviour.

While niggling a player might have been an acceptable part of the game, sledging has reached a point where a strict code of ethics needs to be drawn up and adhered to. While this may take away some of the unique nature of the sport, in the long run it will bring the focus back to the play.

Sledging matters because it is a type of cheating. The rise in cheating, whether it be via match-fixing or sledging, is linked to the rise in commercialisation and gambling in sport.

Australian cricket has formed commercial relationships with major sport betting agencies and an official partnership with Bet365. There have been numerous accusations regarding international match-fixers. And the ball-tampering scandal has confirmed that Australians no longer hold the moral high ground.

The relentless pressure to win has infected the sport at all levels. Sledging is but one of the symptoms.

Why does this matter so much to Australians? A clue may lie in what else is going on: a recent bank inquiry, fears related to immigration, contempt for politicians, growing distrust of public institutions, declining performance in international educational testing, wage stagnation (despite a world-record run of economic growth), and concern over high house prices and power bills. In 2018, Australians have much to be anxious and angry about.




Read more:
Can the cricketers banned for ball tampering ever regain their hero status? It’s happened before


Cricket has always been held above everyday concerns, and has been a source of national pride and a salve in times of fear. Throughout the history of colonial Australia, cricket has been a source of inspiration, an institution that has provided strong links to our communities (school, geographical district, state and territories) and helped define Australian identity.

Modern Australians’ first organised sport in both schools and the community setting was cricket. Our first sporting wins against the “home country” – England – were in cricket Test matches. Cricket was responsible for giving us legitimacy.

Our best cricketers became heroes. Generations looked upon Don Bradman, Dennis Lillee, Steve Waugh and Mark Taylor as role models. Cricket was the hegemonic sport, and its players embodied what Australian masculinity was about. If you played the sport, you played in a tough way (even though it is not a contact sport) within the highly revered rules; cricket could teach you that gracious defeat is as admirable as victory.

Also, especially from about 1990 onwards, the Australian men’s team was outstanding. They won Test series and one-dayers with continued all-round brilliance, producing some of the greatest players the game has ever seen.

But, in the past few years, Australian cricket’s legitimacy has waned. Many of us who love the sport and all it represents have felt disillusioned by recent events at the elite level. This was confirmed to us yesterday with the report – which, thankfully, did not sugar-coat the diagnosis.

So what is the cure? A revised, strict and well-policed ethical code for staff and players will not be enough. Cricket needs to work with commercial partners, or abandon them if they can’t meet high ethical standards too.

Commercial and betting agendas create pressures to cheat, yet Sport 2030, our national plan for sport, under the banner “strengthening the sector’s integrity” suggests:

organisations… adopt a more efficient model of governance which can best position sports to be able to drive greater commercial outcomes, reduce reliance on funding, increase autonomy and support innovation.

And there is the dilemma.

Steve Georgakis, Senior Lecturer of Pedagogy and Sports Studies, University of Sydney

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Health Check: how to tell the difference between hay fever and the common cold



Both make you sneeze and give you a runny nose.
Shutterstock/michaelheim

Reena Ghildyal, University of Canberra and Cynthia Mathew, University of Canberra

You wake up with a runny nose and, come to think of it, you’ve been sneezing more than usual. It feels like the start of a cold, but it’s October – the start of hay fever season – so which is the more likely affliction?

Hay fever and colds are easy to confuse because they share the clinical category of rhinitis, which means irritation and inflammation of the nasal cavity.

The mechanisms share some similarities too, but there are some key differences in symptoms – notably, itchiness and the colour of your snot.




Read more:
Health Check: what is the common cold and how do we get it?


Similar mechanisms

The common cold is a viral infection of the upper respiratory tract, usually caused by rhinoviruses. Colds spread easily from one person to the other via coughing, sneezing and touching infected surfaces.

Hay fever, on the other hand, can’t spread from person to person. It’s an allergic response to an environmental irritant such as pollen or dust.

The nasal cavity contains cells that recognise foreign substances such as bugs and pollen. Once the body detects a bug or irritant, it activates an army of T cells that hunt down and destroy the substance. This is known as an immune response.

In hay fever, the irritant triggers the same immune cells as viruses. But it also causes the release of IgE antibodies and histamines to produce an ongoing blocked nose, impaired sense of smell, and nasal inflammation.




How you tell the difference

Both hay fever and the common cold cause sneezing, a runny or stuffy nose, and coughing.

One of the key differences is the colour of the nasal discharge (your snot): it’s more likely to be yellowish-green in a cold, while in hay fever it’s clear.




Read more:
Curious Kids: Why does my snot turn green when I have a cold?


Facial itchiness – especially around the eyes or throat – is a symptom typically only seen with hay fever.

If someone is allergic to a seasonal environmental trigger such as pollen, their symptoms may be restricted to particular seasons of the year. But if you’re allergic to dust or smoke, symptoms may last all year long.

Hay fever, like asthma, is an allergic disease and can sometimes cause similar symptoms, such as coughing, wheezing and shortness of breath.

A sore throat, on the other hand, generally precedes a cold. If you have cold-like symptoms and a sore throat, or have had one in the last few days, your condition is more likely to be the common cold.

If your throat is sore, it’s probably the start of a cold.
nito/Shutterstock

What if you’ve never had hay fever before?

You’re more likely to catch viral infections during winter when more bugs are circulating, but it’s possible to catch a cold any time of the year.

It’s possible to develop hay fever in adulthood. This may be due to genetic predisposition that manifests only when certain other contributing factors are present, such as a high level of airborne pollen. Or it may be due to a major change in lifestyle, such as a move to a different location or change in diet.

Most adults will get two to three colds per year, while hay fever affects nearly one in five Australians.

Around 10-20% of hay fever sufferers grow out of it at some point in their lives, and about half find their symptoms become less severe as they get older. That still means that, for the majority of sufferers, hay fever lasts a long time.

How are they treated?

An allergy test, using a skin prick or blood test, for allergen-specific IgE could inform you of the specific irritants that trigger your condition. These tests can be organised through your GP or pharmacist.




Read more:
Health Check: what are the options for treating hay fever?


Oral antihistamines are effective for hay fever patients with mild to moderate disease, particularly those whose main symptoms are palatal itch, sneezing, rhinorrhoea or eye symptoms.

Generally, treatment isn’t necessary for a cold, but over-the-counter medications such as paracetamol and ibuprofen can help relieve some of the symptoms.

Reena Ghildyal, Associate Professor in Biomedical Sciences, University of Canberra and Cynthia Mathew, PhD student, Sessional Tutor, University of Canberra

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Five lifestyle changes to enhance your mood and mental health



Getting a good dose of nature can boost your mental health.
Marion Michelle

Jerome Sarris, Western Sydney University and Joe Firth, Western Sydney University

When someone is diagnosed with a mental health disorder such as depression or anxiety, first line treatments usually include psychological therapies and medication. What’s not always discussed are the changeable lifestyle factors that influence our mental health.

Even those who don’t have a mental health condition may still be looking for ways to further improve their mood, reduce stress, and manage their day-to-day mental health.

It can be empowering to make positive life changes. While time and financial constraints may affect some people’s ability to make such changes, we can all make small, meaningful changes.




Read more:
Stroke, cancer and other chronic diseases more likely for those with poor mental health


Here are five lifestyle changes to get you started:

1. Improve your diet and start moving

Wholefoods such as leafy green vegetables, legumes, wholegrains, lean red meat and seafood provide nutrients that are important for optimal brain function. These foods contain magnesium, folate, zinc and essential fatty acids.

Foods rich in polyphenols, such as berries, tea, dark chocolate, wine and certain herbs, also play an important role in brain function.




Read more:
Health Check: seven nutrients important for mental health – and where to find them


In terms of exercise, many types of fitness activities are potentially beneficial – from swimming, to jogging, to lifting weights, or playing sports. Even just getting the body moving by taking a brisk walk or doing active housework is a positive step.

Activities which also involve social interaction and exposure to nature can potentially increase mental well-being even further.

General exercise guidelines recommend getting at least 30 minutes of moderate activity on most days during the week (about 150 minutes total over the week). But even short bouts of activity can provide an immediate elevation of mood.

2. Reduce your vices

Managing problem-drinking or substance misuse is an obvious health recommendation. People with alcohol and drug problems have a greater likelihood than average of having a mental illness, and have far poorer health outcomes.

Some research has shown that a little alcohol consumption (in particular wine) may help prevent depression. Other recent data, however, suggest light alcohol consumption provides no beneficial effects on brain function.

Stopping smoking is also an important step, as nicotine-addicted people are constantly at the mercy of a withdrawal-craving cycle, which profoundly affects mood. It may take a while to get past the initial symptoms of stopping nicotine, but the brain chemistry will adapt in time.

Quitting smoking is associated with better mood and reduced anxiety.

3. Prioritise rest and sleep

Sleep hygiene techniques aim to improve sleep quality and help treat insomnia. They include adjusting caffeine use, limiting your exposure to the bed (regulating your sleep time and having a limited time to sleep), and making sure you get up at a similar time each morning.




Read more:
Health Check: five ways to get a better night’s sleep


Some people are genetically wired to be more of a morning or an evening person, so ideally we need some flexibility in this regard (especially with work schedules).

It’s also important not to force sleep – if you can’t get to sleep within around 20 minutes, it may be best to get up and focus the mind on an activity (with minimal light and stimulation) until you feel tired.

The other mainstay of better sleep is to reduce exposure to light – especially blue light from laptops and smartphones – prior to sleep. This will increase the secretion of melatonin, which helps you get to sleep.

Getting enough time for relaxation and leisure activities is important for regulating stress. Hobbies can also enhance mental health, particularly if they involve physical activity.

4. Get a dose of nature

When the sun is shining, many of us seem to feel happier. Adequate exposure to sunshine helps maintain levels of the mood-regulating chemical serotonin. It also boosts vitamin D levels, which also affect mental health, and – when it comes at the right time of day – helps regulate our sleep-wake cycle.

The benefits of sun exposure need to be balanced with the risk of skin cancer, so take into account the recommendations for sun exposure based on the time of day/year and your skin colour.

You might also consider limiting your exposure to environmental toxins, chemicals and pollutants, including “noise” pollution, and cutting down on your mobile phone, computer and TV use if they’re excessive.

An antidote to this can be simply spending time in nature. Studies show time in the wilderness can improve self-esteem and mood. In some parts of Asia, spending time in a forest (known as forest bathing) is considered a mental health prescription.




Read more:
Hug a tree – the evidence shows it really will make you feel better


A natural extension of spending time with flora is the positive effect that animals have on us. Research suggests having a pet has many positive effects, and animal-assisted therapy (with horses, cats, dogs, and even dolphins) may also boost feelings of well-being.

5. Reach out when you need help

Positive lifestyle changes aren’t a replacement for medication or psychological therapy but, rather, something people can undertake themselves on top of their treatment.

While many lifestyle changes can be positive, some (such as avoiding junk foods or alcohol, or giving up smoking) may be challenging if the habit has been serving as a psychological crutch. These changes might need to be handled delicately, and with professional support.

Strict advice promoting abstinence, or a demanding diet or exercise regime, may cause added suffering, potentially provoking guilt if you can’t meet these expectations. So go easy on yourself.

That said, take a moment to reflect on how you feel mentally after a nutritious wholefood meal, a good night’s sleep (free of alcohol), or a walk in nature with a friend.

Jerome Sarris, Professor; NHMRC Clinical Research Fellow; NICM Health Research Institute Deputy Director, Western Sydney University and Joe Firth, Senior Research Fellow at NICM Health Research Institute, Western Sydney University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Stop worrying and trust the evidence: it’s very unlikely Roundup causes cancer


Roundup is the most common weed killer used worldwide.
from shutterstock.com

Ian Musgrave, University of Adelaide

The common weed killer Roundup (glyphosate) is back in the news after a US court ruled it contributed to a man’s terminal cancer (non-Hodgkin lymphoma). Following the court’s order for manufacturer Monsanto to pay the former school groundskeeper US$289 million in compensation, more than 9,000 people are reportedly also suing the company.

In light of this, Cancer Council Australia is calling for Australia to review glyphosate’s safety. And tonight’s Four Corners report centres on Monsanto’s possible cover-up of evidence of a link between glyphosate and cancer.

Juries don’t decide science, and this latest court case produced no new scientific data. Those who believe glyphosate causes cancer often refer to the 2015 report by the International Agency for Research on Cancer (IARC) that classified the herbicide as “probably carcinogenic to humans”.

IARC’s conclusion was arrived at using a narrower base of evidence than other recent peer-reviewed papers and governmental reviews. Australia’s regulator, the Australian Pesticides and Veterinary Medicines Authority (APVMA), reviewed the safety of glyphosate after IARC’s determination. Its 2016 report concluded that

based on current risk assessment the label instructions on all glyphosate products – when followed – provides adequate protection for users.

The Agricultural Health Study, which followed more than 50,000 people in the US for over ten years, was published in 2018. This real world study in the populations with the highest exposure to glyphosate showed that if there is any risk of cancer from glyphosate preparations, it is exceedingly small.

It also showed that the risk of non-Hodgkin lymphoma is negligible. It is unclear to what extent this study was used in the recent court case.

What did the IARC and others find?

Glyphosate is one of the most used herbicides worldwide. It kills weeds by targeting a specific pathway (the shikimic acid pathway) that exists in plants and a type of bacteria (eubacteria), but not animals (or humans).

In terms of short-term exposure, glyphosate is less toxic than table salt. However, it’s chronic, or long-term, exposure to glyphosate that’s causing the controversy.

Pesticides and herbicides are periodically re-evaluated for their safety and several studies have done so for glyphosate. For instance, in 2015, Germany’s Federal Institute for Risk Assessment suggested glyphosate was neither mutagenic nor carcinogenic.

But then came the IARC’s surprising classification. And the subsequent 2015 review by the European Food Safety Authority, which concluded glyphosate was unlikely to pose a carcinogenic hazard, did little to reassure sceptics.

The key differences between the IARC’s and other reports revolve around the breadth of evidence considered, the weight of human studies, consideration of physiological plausibility and, most importantly, risk assessment. The IARC did not take into account the extent of exposure to glyphosate to establish its association with cancer, while the others did.




Read more:
Council workers spraying the weed-killer glyphosate in playgrounds won’t hurt your children


Demonstrating the mechanism

Establishing whether a chemical can cause cancer in humans involves demonstrating a mechanism by which it can do so. Typical investigations examine whether the chemical causes mutations in bacteria or damage to the DNA of mammalian cells.

The studies reviewed by IARC and the other bodies mentioned, which looked at glyphosate’s ability to produce mutations in bacteria and damage to mammalian cells, were negative. The weight of evidence also indicated glyphosate was unlikely to cause significant DNA damage.

Animal studies

Animal studies are typically conducted in rats or mice. The rodents are given oral doses of glyphosate for up to 89% of their life spans, at concentrations much higher than humans would be exposed to.

Studies examined by the European Food Safety Authority included nine rat studies in which no cancers were seen. Out of five mouse studies, three showed no cancers even at the highest doses. One study showed tumours, but these were not dose-dependent (suggesting random variation, not causation), and in one study tumours were seen at the highest doses in males only.

Glyphosate works by disrupting a pathway that exists in plants but not animals or humans.
from shutterstock.com

This led to the European Food Safety Authority’s overall conclusion that glyphosate was unlikely to be a carcinogenic hazard to humans.

The IARC evaluation included only six rat studies. In one study, cancer was seen, but this wasn’t dose-dependent (again suggesting random variation). It evaluated only two mouse studies: one was negative for cancer, and the other showed a statistically significant “trend” in males.

The IARC thus concluded there was sufficient evidence of carcinogenicity in animals but there was no consistency in tumour type (mouse vs rat) or location.




Read more:
Are common garden chemicals a health risk?


Human studies

This is an enormous field, so I can only briefly summarise the research. The European Food Safety Authority looked at 21 human studies and found no evidence of an association between cancer and glyphosate use. The IARC looked at 19 human studies and found no statistically significant evidence of an association with cancer. It did find three small studies suggesting an association with non-Hodgkin lymphoma (though not a statistically significant one).

As already mentioned, the large Agricultural Health Study found no association between cancer and glyphosate in humans. And the 2016 review by Australia’s regulator concluded glyphosate was safe if used as directed.

It’s possible that animus towards Monsanto and genetically modified organisms influenced the recent jury’s decision far more than any science. But such sentiments have no bearing on the scientific findings.

Ian Musgrave, Senior lecturer in Pharmacology, University of Adelaide

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Just because you’re thin, doesn’t mean you’re healthy



Being thin doesn’t mean you can eat unhealthy foods and get away with it.
from http://www.shutterstock.com

Dominic Tran, University of Sydney

According to the Australian Institute of Health and Welfare, 63% of Australian adults are overweight or obese.

But it’s much harder to estimate how many are within a healthy weight range but have poor diets or sedentary lifestyles. These can cause significant health problems that will often be missed because the person appears to look “healthy”.




Read more:
I’m not overweight, so why do I need to eat healthy foods?


How do we judge whether weight is healthy?

Obesity statistics are often based on estimates of body fat using body mass index (BMI). Although BMI isn’t perfectly correlated with body fat percentage, it’s a quick and easy method for collecting data, requiring just the person’s height and weight. If the BMI is higher than 25, a person is considered “overweight”. If it’s above 30, they’re considered “obese”. But BMI doesn’t tell us how healthy someone is on the inside.
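
For readers who want the arithmetic (the formula is standard; the numbers are our own illustration): BMI is weight in kilograms divided by height in metres squared. So a person who is 1.75 m tall and weighs 80 kg has a BMI of 80 ÷ (1.75 × 1.75) ≈ 26.1 – just inside the “overweight” range.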

Using additional lifestyle measures, such as diet and exercise frequency over the past year, a recent report from Queensland Health estimated 23% of those who are not currently overweight or obese are at risk of becoming so in the future.

These figures indicate that the percentage of unhealthy-weight individuals does not accurately capture the percentage of unhealthy-lifestyle individuals, with the latter number likely to be much higher.




Read more:
We asked five experts: is BMI a good way to tell if my weight is healthy?


If you’re not overweight, does a healthy lifestyle matter?

Many people think if they’re able to stay lean while eating poorly and not exercising, then that’s OK. But though you might appear healthy on the outside, you could have the same health concerns as overweight and obese individuals on the inside.

When considering risk factors associated with heart disease and stroke or cancer, we often think about health indicators such as smoking, cholesterol, blood pressure, and body weight. But poor diet and physical inactivity also each increase the risk for heart disease and have a role to play in the development of some cancers.

So even if you don’t smoke and you’re not overweight, being inactive and eating badly increases your risk of developing heart disease.

Little research has been done to compare how much diet and exercise contribute to the risk of heart disease in overweight versus lean but unhealthy individuals. However, one study measured the risk of different lifestyle factors associated with complications following acute coronary syndrome – a sudden reduction in blood flow to the heart.

It found adherence to a healthy diet and exercise regime halved the risk of having a major complication (such as stroke or death) in the six months following the initial incident compared with non-adherence.

Unhealthy diets are bad for your body, but what about your brain?

Recent research has also shown overconsumption of high-fat and high-sugar foods may have negative effects on your brain, causing learning and memory deficits. Studies have found obesity is associated with impairments in cognitive functioning, as assessed by a range of learning and memory tests, such as the ability to remember a list of words previously presented some minutes or hours earlier.

Notably, this relationship between body weight and cognitive functioning was present even after controlling for a range of factors including education level and existing medical conditions.

Of particular relevance to this discussion is the growing body of evidence that diet-induced cognitive impairments can emerge rapidly — within weeks or even days. For example, a study conducted at Oxford University found healthy adults assigned to a high-fat diet (75% of energy intake) for five days showed impaired attention, memory, and mood compared to a low-fat diet control group.

Another study conducted at Macquarie University also found eating a high-fat and high-sugar breakfast each day for as little as four days resulted in learning and memory deficits similar to those observed in overweight and obese individuals.

These findings confirm the results of rodent studies showing specific forms of memories can be impaired after only a few days on a diet containing sugar water and human “junk” foods such as cakes and biscuits.

Body weight was not hugely different between the groups eating a healthy diet and those on high-fat and high-sugar diets. So this shows the negative consequences of poor dietary intake can occur even when body weight has not noticeably changed. These studies show body weight is not always the best predictor of internal health.

We still don’t know much about the mechanism(s) through which these high-fat and high-sugar foods impair cognitive functioning over such short periods. One possible mechanism is the changes to blood glucose levels from eating high-fat and high-sugar foods. Fluctuations in blood glucose levels may impair glucose metabolism and insulin signalling in the brain.

Many people use low body weight to excuse unhealthy eating and physical inactivity. But body weight is not the best indicator of internal well-being. A much better indicator is your diet. When it comes to your health, it’s what’s on the inside that counts and you really are what you eat.

Dominic Tran, Postdoctoral Research Associate, University of Sydney

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Daily low-dose aspirin doesn’t reduce heart-attack risk in healthy people


For decades, doctors have been prescribing low-dose aspirin for healthy people over the age of 70.
from shutterstock.com

John McNeil, Monash University

Taking low-dose aspirin daily doesn’t preserve good health or delay the onset of disability or dementia in healthy older people. This was one finding from our seven-year study that included more than 19,000 older people from Australia and the US.

We also found daily low-dose aspirin does not prevent heart attack or stroke when taken by elderly people who hadn’t experienced either condition before. However, it does increase the risk of major bleeding.

It has long been established that aspirin saves lives when taken by people after a cardiac event such as a heart attack. And it had been apparent since the 1990s there was a lack of adequate evidence to support the use of low-dose aspirin in healthy older people. Yet, many healthy older people continued being prescribed aspirin for this purpose.




Read more:
How Australians Die: cause #1 – heart diseases and stroke


With the growing proportion of elderly people in our community, a major focus of preventive medicine is to maintain the independence of this age-group for as long as possible. This has increased the need to resolve whether aspirin in the healthy elderly actually prolongs their good health.

Published in the New England Journal of Medicine today, the ASPirin in Reducing Events in the Elderly (ASPREE) trial was the largest and most comprehensive clinical trial conducted in Australia. It compared the effects of aspirin and a placebo in people over the age of 70 without a medical condition that required aspirin.

Our findings mean millions of healthy people over the age of 70, and their doctors, will now know daily aspirin is not the answer to prolonging good health.

Why aspirin for prevention?

Aspirin was first synthesised in 1898. Since the 1960s it has been known that aspirin lowers the risk of heart attack and stroke among those who have had heart disease or stroke before. This is referred to as secondary prevention.




Read more:
Weekly Dose: aspirin, the pain and fever reliever that prevents heart attacks, strokes and maybe cancer


This effect has been attributed to aspirin’s ability to prevent platelets from clumping together and obstructing blood vessels – sometimes referred to as “thinning the blood”.

It had been assumed this protective action could be extrapolated to people who were otherwise healthy to prevent a first heart attack or stroke (known as primary prevention). A number of early primary prevention trials in middle-aged people appeared to confirm this view.

However more recent trials, including the ASCEND trial in diabetes and the ARRIVE trial in younger high-risk individuals, have thrown doubt on this proposition.

Aspirin is known for its blood-thinning properties, which can also increase the risk of bleeding.
from shutterstock.com

In older people, any effect of aspirin on reducing heart disease or stroke might be expected to be enhanced because of their higher underlying risk. But aspirin’s adverse effects (mainly bleeding) might also be increased as older people are at higher risk of bleeding.

The balance between risks and benefits in this age group was previously quite unclear. This was also recognised in various clinical guidelines for aspirin use, which specifically acknowledged the lack of evidence in people older than 70.

The ASPREE trial

A trial of aspirin in the elderly was first called for in the early 1990s. But since aspirin was off patent, there was little prospect of securing industry funding to support a large trial. Controversy around the use of aspirin for primary prevention in the mid-2000s, however, led to Monash University receiving initial funding from the National Health and Medical Research Council.

Funding in Australia was only a part of that required to establish a trial the size and complexity of ASPREE. A grant from the US National Institute on Aging (and subsequently from the US National Cancer Institute) made the study feasible.

Another challenge was recruiting the necessary thousands of older volunteers who were healthy and living and often working in their community. Unlike most studies, we required participants who weren’t in hospital or sick.




Read more:
Both statins and a Mediterranean-style diet can help ward off heart disease and stroke


This was addressed with the assistance of more than 2,000 GPs who collaborated with the research team, supporting recruitment of their patients and overseeing their health. In Australia, 16 sites were established across south-eastern Australia – in Tasmania, Victoria, the ACT and southern NSW – to localise study activity and host community events that kept our volunteers updated and involved.

ASPREE is the first major prevention trial to use disability-free survival as the primary health measure. Disability-free survival provides a single integrated measure of whether an intervention such as aspirin provides net benefit. The rationale is that there is little point for elderly people to be taking a preventive medication unless it preserves good health and unless benefits of the medication outweigh any adverse effects.

Large-scale preventive health studies like ASPREE will become increasingly important to help keep an ageing population fit, healthy, out of hospital and living independently. As new preventive opportunities arise they will typically require large clinical trials, and the structure of the Australian health system has proven an ideal setting for this type of study.

Other results from the ASPREE trial will continue to appear for some time. These will describe longer-term effects of daily low-dose aspirin on issues such as dementia and cancer. It will also provide valuable information about other strategies to promote healthy ageing well into the future.

John McNeil, Professor, Head of School of Public Health & Preventive Medicine, Monash University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Health Check: what are nightshade vegetables and are they bad for you?



No, these veggies are not trying to kill you.
from http://www.shutterstock.com

Duane Mellor, Coventry University and Nenad Naumovski, University of Canberra

If you get your health news from blogs such as Goop and Dr Oz, you might be led to believe a certain group of vegetables called “nightshade vegetables” are bad for you.

The theory goes that members of the plant family Solanaceae, which includes tomatoes, capsicums, chilli peppers, eggplant and potatoes, contain toxins designed to stop us from eating them, which are damaging to our health.

This idea comes from the fact that poisonous berries called “nightshade” are also in the Solanaceae family. But that doesn’t mean all plants in this family are toxic, and the nutrient-rich Solanaceae vegetables are the building blocks of some of the healthiest dietary patterns on the planet (such as the Mediterranean diet).




Read more:
Food as medicine: why do we need to eat so many vegetables and what does a serve actually look like?


The “toxins” in these vegetables that some have claimed to be the problem are compounds called lectins. Lectins are proteins (the stuff meat is made of) or enzymes that exist in many foods and in our bodies. They’re slightly different to the proteins in meat and muscle, as they have sugars attached to them, meaning they can bind cells together.

Those who believe lectins are harmful think they stick the cells in our body together, causing potential damage and pain, such as arthritis. However, the simple act of cooking helps break down these lectins, so the minute risk of any negative effect is easily removed.

The other key point is that levels vary between foods. While some foods contain lectins in high quantities (such as kidney beans, which should only be eaten cooked), quantities are very low in foods we would eat raw, such as tomatoes and capsicums.

But aren’t these chemicals designed to stop us eating them?

It’s sometimes thought the reason plants make lectins is to stop them being eaten, and that because of this they must cause us harm. One claim is that they cause inflammation, worsening arthritis. But in our recent review of the research there was little evidence for this.

The evidence that exists on arthritis and other forms of disease related to inflammation (including heart disease) supports the role of the Mediterranean-type diet. This is based on vegetables, including those from the Solanaceae family.

It’s a myth that compounds plants produce to stop them being eaten are harmful to us.
Linh Pham/Unsplash



Read more:
Health Check: are microgreens better for you than regular greens?


It’s also a myth that compounds plants produce to stop them being eaten are harmful to us. Increasingly, there’s evidence many of these compounds can have beneficial effects. Polyphenols – bitter chemicals found in a range of fruit and vegetables to deter eating – have been shown to reduce the risk of heart disease and stroke, and maybe even dementia.

Although there are no apparent benefits of lectins, and some theoretical potential for harm (cells sticking together in a test tube, or vomiting after eating raw kidney beans), these naturally occurring chemicals are easily broken down by cooking.

So, on the plate, lectins are not an issue. And the so-called “nightshade vegetables” are beneficial for health in numerous other ways, from vitamins and minerals through to fibre and polyphenols. The key is to eat as wide a variety of fruit and vegetables as possible to maximise these health-improving factors.

What’s in a name?

The favourite of health bloggers and “superfood” proponents is the goji berry. But surprise, this too belongs to the nightshade family.

Goji berries have been claimed to treat dry skin, promote longevity and even improve sexual desire. Some of these claims might be related to its historical use as a traditional Chinese treatment.

It does contain vitamins A and C, so it’s not devoid of any nutritional value. But any claims of health benefits above and beyond any other kind of berry are currently unproven.

So the message here? Don’t worry too much about which fruits or vegetables famous bloggers or TV doctors tell you to eat or not to eat. Enjoy them all, be sure to get your “two and five” serves, and store and wash them properly before you tuck in.




Read more:
Health Check: can chopping your vegetables boost their nutrients?




Duane Mellor, Senior Lecturer in Human Nutrition, Coventry University and Nenad Naumovski, Assistant Professor in Food Science and Human Nutrition, University of Canberra

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Belly fat is the most dangerous, but losing it from anywhere helps



We can’t target certain areas for weight loss, but losing it from anywhere is good.
from http://www.shutterstock.com

Evelyn Parr, Australian Catholic University

Excess storage of fat is linked to many different chronic diseases. But some areas of fat storage on the body are worse than others.

In general, women have greater absolute body fat percentages than men. Typically, women carry more fat around the legs, hip and buttocks, as well as the chest and upper arms. Women have more subcutaneous fat – the fat you can pinch under your skin – while men typically have more visceral fat, which is stored in and around the abdominal organs.

People who have greater fat stores around their butt and thigh (gluteal-femoral) regions are at lower risk of chronic diseases, such as diabetes and heart disease, than those with greater fat stores around their middle.




Read more:
Explainer: overweight, obese, BMI – what does it all mean?


Why is belly fat more dangerous?

Excess fat around the tummy is subcutaneous fat – which you can pinch – as well as visceral fat, which is in and around the organs in the abdominal cavity and only visible using medical scans. Researchers have found excess visceral fat storage is a significant risk factor for metabolic health complications of obesity such as type 2 diabetes, fatty liver and heart disease.

The fat around the organs is a different kind of fat.
from http://www.shutterstock.com

Fat cells in a healthy person are able to grow, recruit inflammatory cells to help reduce inflammation, and remodel themselves in order to allow for healthy body growth. But if there is excess fat tissue, these mechanisms don’t function as well. And with excess fat, the body can become resistant to the hormone insulin – which maintains our blood sugar levels.

Visceral (belly) fat secretes greater levels of adipokines – chemicals that trigger inflammation – and releases more fatty acids into the bloodstream. By contrast, the fat cells in the leg region, and the pinchable, subcutaneous layers of fat around the middle, store fatty acids within themselves rather than pushing them into the circulation.

The fat around the hips and legs is more passive, meaning it releases fewer chemicals into the body.




Read more:
Coffee companion: how that muffin or banana bread adds to your waistline


Just try to lose fat, anywhere

A recent weight-loss study that looked at where fat mass was lost found the area of fat loss didn’t change the risk factors for heart disease and stroke. The important thing was losing fat from anywhere.

While diet and exercise are unable to specifically target regions of fat depots, fat mass loss from anywhere can improve risk factors.

Online ads might tell you a magic workout machine will reduce fat in one particular area, but adipose tissue is unable to be targeted in the same way that we can target a specific muscle group.

Total loss of fat mass, through a healthy diet and exercise, is the best outcome for overall health and reducing either the symptoms of chronic disease (such as diabetes) or the risk of developing disease such as diabetes or heart disease.

Evelyn Parr, Research Fellow in Exercise Metabolism and Nutrition, Mary MacKillop Institute for Health Research, Australian Catholic University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Health Check: how long are you contagious with gastro?



If this is you, stay away.
From shutterstock.com

Vincent Ho, Western Sydney University

There’s no way you’d want to go to work when you’ve got the telltale signs of gastro: nausea, abdominal cramps, vomiting and diarrhoea. But what about when you’re feeling a bit better? When is it safe to be around colleagues, or send your kids to school or daycare?

The health department recommends staying home from work or school for a minimum of 24 hours after you last vomited or had diarrhoea. But how long someone remains contagious after recovering from gastro is a very different question.

What causes gastro?

To better understand how long you can be contagious with gastro, we need to look at the various causes.

Viruses are the most common causes of gastro. Rotavirus is the leading cause in infants and young children, whereas norovirus is the leading cause of gastro in adults.

There are around 1.8 million cases of norovirus infection in Australia each year. This accounts for almost 40% of the total cases of gastro.

Bacterial gastroenteritis is also common and accounts for around 1.6 million cases a year. Of those cases, 1.1 million come from E. coli infections. Other bacteria that commonly cause gastro include salmonella, shigella and campylobacter. These bacteria are often found in raw or undercooked meat, seafood, and unpasteurised milk.

Parasites such as giardia lamblia, entamoeba histolytica and cryptosporidium account for around 700,000 cases of gastro per year. Most of the time people recover from parasitic gastroenteritis without incident, but it can cause problems for people with weaker immune systems.




Read more:
Health Check: I feel a bit sick, should I stay home or go to work?


Identifying the bug

Most cases of diarrhoea are mild, and resolve themselves with no need for medical attention.

But some warrant further investigation, particularly among returned travellers, people who have had diarrhoea for four or five days (or more than one day with a fever), patients with bloody stools, those who have recently used antibiotics, and patients whose immune systems are compromised.

Most cases of gastro will resolve on their own.
From shutterstock.com

The most common test is the stool culture, which is used to identify microbes grown from loose or unformed stools. The bacterial yield of stool cultures is generally low. But if it does come back with a positive result, it can be potentially important for the patient.

Some organisms that are isolated in stool cultures are notifiable to public health authorities. This is because of their potential to cause serious harm in vulnerable groups such as the elderly, young children, pregnant women and those with weakened immune systems.

The health department must be notified of gastro cases caused by campylobacter, cryptosporidium, listeria, salmonella, shigella and certain types of E.coli infection. This can help pinpoint outbreaks when they arise and allow for appropriate control measures.

You might feel better but your poo isn’t

Gastro bugs are spread via the faecal-oral route, which means faeces needs to come into contact with the mouth for transmission to occur.

Sometimes this can happen if contaminated faecal material gets into drinking water, or during food preparation.

But more commonly, tiny particles of poo might remain on the hands after going to the toilet. Using toilet paper to wipe doesn’t completely prevent the contamination of hands, especially when the person has diarrhoea.

The particles then make their way to another person’s mouth, whether during food preparation, or when someone touches a contaminated surface and then puts their fingers in their mouth.

After completely recovering from the symptoms of gastro, infectious organisms can still be shed into stools. Faecal shedding of campylobacter, the E. coli O157 strain, salmonella, shigella, cryptosporidium, entamoeba, and giardia can last for many days to weeks. In fact, some people who have recovered from salmonella have shed the bacteria into their stools 102 days later.

Parasites can remain alive in the bowel for a long period of time after diarrhoea finishes. Infectious cryptosporidium oocysts can be shed into stools for up to 50 days. Giardia oocysts can take even longer to be excreted.

So, how long should you stay away?

Much of the current advice on when people can return to work, school or child care after gastro is based on the most common viral gastroenteritis, norovirus, even though few patients will discover the cause of their bug.

For norovirus, the highest rate of viral shedding into stools occurs 24 to 48 hours after all symptoms have stopped. The viral shedding rate then starts to quickly decrease. So people can return to work 48 hours after symptoms have stopped.

Yes, viral shedding into stools can occur for longer than 48 hours. But because norovirus infection is so common and recovery is rapid, it’s not considered practical to demand patients’ stools be clear of the virus before returning to work.

Children in a day care setting are vulnerable to gastro outbreaks.
From shutterstock.com

While 24 hours may be appropriate for many people, a specific 48-hour exclusion rule is considered necessary for those in a higher-risk category for spreading gastro to others. These include food handlers, health care workers and children under the age of five at child care or play group.




Read more:
Health Check: what to eat and drink when you have gastro


If you have a positive stool culture for a notifiable organism, that may change the situation. Food handlers, childcare workers and health-care workers affected by verotoxin E.coli, for example, are not permitted to work until symptoms have stopped and two consecutive faecal specimens taken at least 24 hours apart have tested negative for verotoxin E. coli. This may lead to a lengthy exclusion period from work, possibly several days.

How to stop the spread

Diligently washing your hands often with soap and water is the most effective way to stop the spread of these gastro bugs to others.

Consider this: when 10,000 giardia cysts were placed in the palm of a hand, handwashing with soap eliminated 99% of them.


To prevent others from becoming sick, disinfect contaminated surfaces thoroughly immediately after someone vomits or has diarrhoea. While wearing disposable gloves, wash surfaces with hot water and a neutral detergent, then use household bleach containing 0.1% hypochlorite solution as a disinfectant.
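
A quick note on the dilution arithmetic, in case your bleach needs diluting to reach that strength (a worked illustration only – concentrations vary, so check the product label): divide the labelled hypochlorite concentration by the 0.1% target to get the dilution factor. For a household bleach sold at 4% hypochlorite, that’s 4 ÷ 0.1 = 40, so mix 1 part bleach with 39 parts water – roughly 25 ml of bleach made up to 1 litre.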

Vincent Ho, Senior Lecturer and clinical academic gastroenterologist, Western Sydney University

This article was originally published on The Conversation. Read the original article.